The Road to Accountable AI
Artificial intelligence is changing business, and the world. How can you navigate through the hype to understand AI's true potential, and the ways it can be implemented effectively, responsibly, and safely? Wharton Professor and Chair of Legal Studies and Business Ethics Kevin Werbach has analyzed emerging technologies for thirty years, and created one of the first business school courses on legal and ethical considerations of AI in 2016. He interviews the experts and executives building accountable AI systems in the real world, today.
Kay Firth-Butterfield: Using AI Wisely
06/26/2025
Kevin Werbach interviews Kay Firth-Butterfield about how responsible AI has evolved from a niche concern to a global movement. As the world’s first Chief AI Ethics Officer and former Head of AI at the World Economic Forum, Firth-Butterfield brings deep experience aligning AI with human values. She traces the field's evolution from its early days, when it was dominated by philosophical debates, to today, when regulation such as the European Union's AI Act is defining the rules of the road. Firth-Butterfield highlights the growing trust gap in AI, warning that rapid deployment without safeguards is eroding public confidence. Drawing on her work with Fortune 500 firms and her own cancer journey, she argues for human-centered AI, especially in high-stakes areas like healthcare and law. She also underscores the equity issues tied to biased training data and lack of access in the Global South, noting that AI is now generating data based on historical biases. Despite these challenges, she remains optimistic and calls for greater focus on sustainability, access, and AI literacy across sectors. Kay Firth-Butterfield is the founder and CEO of Good Tech Advisory LLC. She was the world’s first C-suite appointee in AI ethics and was the inaugural Head of AI and Machine Learning at the World Economic Forum from 2017 to 2023. A former judge and barrister, she advises governments and Fortune 500 companies on AI governance and remains affiliated with Doughty Street Chambers in the UK. (Time100 Impact Awards)
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/37134515
Dale Cendali: How Courts (and Maybe Congress!) Will Determine AI's Copyright Fate
06/19/2025
Kevin Werbach interviews Dale Cendali, one of the country’s leading intellectual property (IP) attorneys, to discuss how courts are grappling with copyright questions in the age of generative AI. Over 30 IP lawsuits have already been filed against major generative AI firms, and the outcomes may shape the future of AI as well as creative industries. While we couldn't discuss specifics of one of the most talked-about cases, Thomson Reuters v. ROSS -- because Cendali is litigating it on behalf of Thomson Reuters -- she drew on her decades of experience in IP law to provide an engaging look at the legal battlefield and the prospects for resolution. Cendali breaks down the legal challenges around training AI on copyrighted materials—from books to images to music—and explains why these cases are unusually complex for copyright law. She discusses the recent US Copyright Office report on Generative AI training, what counts as infringement in AI outputs, and what is sufficient human authorship for copyright protection of AI works. While precedent offers some guidance, Cendali notes that outcomes will depend heavily on the specific facts of each case. The conversation also touches on how well courts can adapt existing copyright law to these novel technologies, and the prospects for a legislative solution. Dale Cendali is a partner at Kirkland & Ellis, where she leads the firm’s nationwide copyright, trademark, and internet law practice. She has been named one of the 25 Icons of IP Law and one of the 100 Most Influential Lawyers in America. She also serves as an advisor to the American Law Institute’s Copyright Restatement project and sits on the Board of the International Trademark Association.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/37039165
Brenda Leong: Building AI Law Amid Legal Uncertainty
06/12/2025
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/36937440
Shameek Kundu: AI Testing and the Quest for Boring Predictability
06/05/2025
Kevin Werbach interviews Shameek Kundu, Executive Director of AI Verify Foundation, to explore how organizations can ensure AI systems work reliably in real-world contexts. AI Verify, a government-backed nonprofit in Singapore, aims to build scalable, practical testing frameworks to support trustworthy AI adoption. Kundu emphasizes that testing should go beyond models to include entire applications, accounting for their specific environments, risks, and data quality. He draws on lessons from AI Verify’s Global AI Assurance pilot, which matched real-world AI deployers—such as hospitals and banks—with specialized testing firms to develop context-aware testing practices. Kundu explains that the rise of generative AI and widespread model use has expanded risk and complexity, making traditional testing insufficient. Instead, companies must assess whether an AI system performs well in context, using tools like simulation, red teaming, and synthetic data generation, while still relying heavily on human oversight. As AI governance evolves from principles to implementation, Kundu makes a compelling case for technical testing as a backbone of trustworthy AI. Shameek Kundu is Executive Director of the AI Verify Foundation. He previously held senior roles at Standard Chartered Bank, including Group Chief Data Officer and Chief Innovation Officer, and co-founded a startup focused on testing AI systems. Kundu has served on the Bank of England’s AI Forum, Singapore’s FEAT Committee, the Advisory Council on Data and AI Ethics, and the Global Partnership on AI.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/36831760
Uthman Ali: Responsible AI in a Safety Culture
05/29/2025
Host Kevin Werbach interviews Uthman Ali, Global Responsible AI Officer at BP, to delve into the complexities of implementing responsible AI practices within a global energy company. Ali emphasizes how the culture of safety in the industry influences BP's willingness to engage in AI governance. He discusses the necessity of embedding ethical AI principles across all levels of the organization, emphasizing tailored training programs for various employee roles—from casual AI users to data scientists—to ensure a comprehensive understanding of AI’s ethical implications. He also highlights the importance of proactive governance, advocating for the development of ethical policies and procedures that address emerging technologies such as robotics and wearables. Ali’s approach underscores the balance between innovation and ethical responsibility, aiming to foster an environment where AI advancements align with societal values and regulatory standards. Uthman Ali is BP’s first Global Responsible AI Officer, and has been instrumental in establishing the company’s Digital Ethics Center of Excellence. He advises prominent organizations such as the World Economic Forum and the British Standards Institute on AI governance and ethics. Additionally, Ali contributes to research and policy discussions as an advisor to Oxford University's Oxethica spinout and various AI safety institutes. (IEEE Standards Association) (2024 podcast interview)
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/36739365
Karen Hao: Is Imperial AI Inevitable?
05/22/2025
Kevin Werbach interviews journalist and author Karen Hao about her new book Empire of AI, which chronicles the rise of OpenAI and the broader implications of generative artificial intelligence. Hao reflects on how the ethical challenges of AI have evolved, noting the shift from concerns like data privacy and algorithmic bias to more complex issues such as intellectual property violations, environmental impact, misleading user experiences, and concentration of power. She emphasizes that while some technical solutions exist, they are rarely implemented by developers, and foundational harms often occur before tools reach end users. Hao argues that OpenAI’s trajectory was not inevitable but instead the result of specific ideological beliefs, aggressive scaling decisions, and CEO Sam Altman’s singular fundraising prowess. She critiques the “pseudo-religious” ideologies underpinning Silicon Valley’s AI push, where utopian and doomer narratives coexist to justify rapid development. Hao outlines a more democratic alternative focused on smaller, task-specific models and stronger regulation to redirect AI’s future trajectory. Karen Hao has written about AI for publications such as The Atlantic, The Wall Street Journal, and MIT Technology Review. She was the first journalist to ever profile OpenAI, and leads The AI Spotlight Series, a program with the Pulitzer Center that trains thousands of journalists around the world on how to cover AI. She has also been a fellow with the Harvard Technology and Public Purpose program, the MIT Knight Science Journalism program, and the Pulitzer Center’s AI Accountability Network. She won an American Humanist Media Award in 2024, and an American National Magazine Award in 2022. (The Atlantic, 2023) (Wall St. Journal, 2023) (The Atlantic, 2023) (MIT Technology Review, 2020)
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/36539235
Jaime Banks: How Users Perceive AI Companions
05/15/2025
AI companion applications, which create interactive personas for one-on-one conversations, are incredibly popular. However, they raise a number of challenging ethical, legal, and psychological questions. In this episode, Kevin Werbach speaks with researcher Jaime Banks about how users view their conversations with AI companions, and the implications for governance. Banks shares insights from her research on mind-perception, and how AI companion users engage in a willing suspension of disbelief similar to watching a movie. She highlights both potential benefits and dangers, as well as novel issues such as the real feelings of loss users may experience when a companion app shuts down. Banks advocates for data-driven policy approaches rather than moral panic, suggesting responses such as an "AI user's Bill of Rights" for these services. Jaime Banks is Katchmar-Wilhelm Endowed Professor at the School of Information Studies at Syracuse University. Her research examines human-technological interaction, including social AI, social robots, and videogame avatars. She focuses on relational construals of mind and morality, communication processes, and how media shape our understanding of complex technologies. Her current funded work focuses on social cognition in human-AI companionship and on the effects of humanizing language on moral judgments about AI. (The Guardian, April 2025) (NY Times, October 2024) (Syracuse iSchool video)
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/36531135
Kelly Trindel: AI Governance Across the Enterprise? All in a Day’s Work
05/08/2025
In this episode, Kevin Werbach interviews Kelly Trindel, Head of Responsible AI at Workday. Although Trindel's team is housed within Workday’s legal department, it operates as a multidisciplinary group, bringing together legal, policy, data science, and product expertise. This structure helps ensure that responsible AI practices are integrated not just at the compliance level but throughout product development and deployment. She describes formal mechanisms—such as model review boards and cross-functional risk assessments—that embed AI governance into product workflows across the company. The conversation covers how Workday evaluates model risks based on context and potential human impact, especially in sensitive areas like hiring and performance evaluation. Trindel outlines how the company conducts bias testing, maintains documentation, and uses third-party audits to support transparency and trustworthiness. She also discusses how Workday is preparing for emerging regulatory frameworks, including the EU AI Act, and how internal governance systems are designed to be flexible in the face of evolving policy and technological change. Other topics include communicating AI risks to customers, sustaining post-deployment oversight, and building trust through accountability infrastructure. Dr. Kelly Trindel directs Workday’s AI governance program. As a pioneer in the responsible AI movement, Kelly has significantly contributed to the field, including testifying before the U.S. Equal Employment Opportunity Commission (EEOC) and later leading an EEOC task force on ethical AI—one of the government’s first. With more than 15 years of experience in quantitative science, civil rights, public policy, and AI ethics, Kelly’s influence and commitment to responsible AI are instrumental in driving the industry forward and fostering AI solutions that have a positive societal impact. (video masterclass)
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/36456255
David Weinberger: How AI Challenges Our Fundamental Ideas
05/01/2025
Professor Werbach interviews David Weinberger, author of several books and a long-time deep thinker on internet trends, about the broader implications of AI on how we understand and interact with the world. They examine the idea that throughout history, dominant technologies—like the printing press, the clock, or the computer—have subtly but profoundly shaped our concepts of knowledge, intelligence, and identity. Weinberger argues that AI, and especially machine learning, represents a new kind of paradigm shift: unlike traditional computing, which requires humans to explicitly encode knowledge in rules and categories, AI systems extract meaning and make predictions from vast numbers of data points without needing to understand or generalize in human terms. He describes how these systems uncover patterns beyond human comprehension—such as identifying heart disease risk from retinal scans—by finding correlations invisible to human experts. Their discussion also grapples with the disquieting implications of this shift, including the erosion of explainability, the difficulty of ensuring fairness when outcomes emerge from opaque models, and the way AI systems reflect and reinforce cultural biases embedded in the data they ingest. The episode closes with a reflection on the tension between decentralization—a value long championed in the internet age—and the current consolidation of AI power in the hands of a few large firms, as well as Weinberger’s controversial take on copyright and data access in training large models. David Weinberger is a pioneering thought-leader about technology's effect on our lives, our businesses, and ideas. He has written several best-selling, award-winning books explaining how AI and the Internet impact how we think the world works, and the implications for business and society. 
In addition to writing for many leading publications, he has been a writer-in-residence, twice, at Google AI groups, Editor of the Strong Ideas book series for MIT Press, a Fellow at the Harvard Berkman Klein Center for Internet and Society, contributor of dozens of commentaries on NPR's All Things Considered, a strategic marketing VP and consultant, and for six years a Philosophy professor. (Wired) (Harvard Business Review)
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/36359685
Ashley Casovan: From Privacy Practice to AI Governance
04/24/2025
Professor Werbach talks with Ashley Casovan, Managing Director of the AI Governance Center at the IAPP, the global association for privacy professionals and related roles. Ashley shares how privacy, data protection, and AI governance are converging, and why professionals must combine technical, policy, and risk expertise. They discuss efforts to build a skills competency framework for AI roles and examine the evolving global regulatory landscape—from the EU’s AI Act to U.S. state-level initiatives. Drawing on Ashley’s experience in the Canadian government, the episode also explores broader societal challenges, including the need for public dialogue and the hidden impacts of automated decision-making. Ashley Casovan serves as the primary thought leader and public voice for the IAPP on AI governance. She has developed expertise in responsible AI, standards, policy, open government and data governance in the public sector at the municipal and federal levels. As the director of data and digital for the government of Canada, Casovan previously led the development of the world’s first national government policy for responsible AI. Casovan served as the Executive Director of the Responsible AI Institute, a member of OECD’s AI Policy Observatory Network of Experts, a member of the World Economic Forum's AI Governance Alliance, an Executive Board Member of the International Centre of Expertise in Montréal on Artificial Intelligence and as a member of the IFIP/IP3 Global Industry Council within the UN.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/36209255
Lauren Wagner: The Potential of Private AI Governance
04/17/2025
Kevin Werbach interviews Lauren Wagner, a builder and advocate for market-driven approaches to AI governance. Lauren shares insights from her experiences at Google and Meta, emphasizing the critical intersection of technology, policy, and trust-building. She describes the private AI governance model, including private-sector incentives and transparency measures, such as enhanced model cards, to guide responsible AI development without heavy-handed regulation. Lauren also explores ongoing challenges around liability, insurance, and government involvement, highlighting the potential of public procurement policies to set influential standards. Reflecting on California's SB 1047 AI bill, she discusses its drawbacks and praises the inclusive debate it sparked. Lauren concludes by promoting productive collaborations between private enterprises and governments, stressing the importance of transparent, accountable, and pragmatic AI governance approaches. Lauren Wagner is a researcher, operator and investor creating new markets for trustworthy technology. She is currently a Term Member at the Council on Foreign Relations, a Technical & AI Policy Advisor to the Data & Trust Alliance, and an angel investor in startups with a trust & safety edge, particularly AI-driven solutions for regulated markets. She has been a Senior Advisor to Responsible Innovation Labs, an early-stage investor at Link Ventures, and held senior product and marketing roles at Meta and Google. (February 2025) (March 2025)
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/36088880
Medha Bankhwal and Michael Chui: Implementing AI Trust
04/10/2025
Kevin Werbach speaks with Medha Bankhwal and Michael Chui from QuantumBlack, the AI division of the global consulting firm McKinsey. They discuss how McKinsey's AI work has evolved from strategy consulting to hands-on implementation, with AI trust now embedded throughout their client engagements. Chui highlights what makes the current AI moment transformative, while Bankhwal shares insights from McKinsey's recent AI survey of over 760 organizations across 38 countries. As they explain, trust remains a major barrier to AI adoption, although there are geographic differences in AI governance maturity. Medha Bankhwal, a graduate of Wharton's MBA program, is an Associate Partner, as well as Co-founder of McKinsey’s AI Trust / Responsible AI practice. Prior to McKinsey, Medha was at Google and subsequently co-founded a digital learning not-for-profit startup. She co-leads forums for AI safety discussions for policy + tech practitioners, titled “Trustworthy AI Futures” as well as a community of ex-Googlers dedicated to the topic of AI Safety. Michael Chui is a senior fellow at QuantumBlack, AI by McKinsey. He leads research on the impact of disruptive technologies and innovation on business, the economy, and society. Michael has led McKinsey research in such areas as artificial intelligence, robotics and automation, the future of work, data & analytics, collaboration technologies, the Internet of Things, and biological technologies. (March 12, 2025) (January 28, 2025) (November 26, 2024)
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/35910115
Eric Bradlow: AI Goes to Business School
04/03/2025
Kevin Werbach speaks with Eric Bradlow, Vice Dean of AI & Analytics at Wharton. Bradlow highlights the transformative impacts of AI from his perspective as an applied statistician and quantitative marketing expert. He describes the distinctive approach of Wharton's analytics program, and its recent evolution with the rise of AI. The conversation highlights the significance of legal and ethical responsibility within the AI field, and the genesis of the new Wharton Accountable AI Lab. Werbach and Bradlow then examine the role of academic institutions in shaping the future of AI, and how institutions like Wharton can lead the way in promoting accountability, learning and responsible AI deployment. Eric Bradlow is the Vice Dean of AI & Analytics at Wharton, Chair of the Marketing Department, and also a professor of Economics, Education, Statistics, and Data Science. His research interests include Bayesian modeling, statistical computing, and developing new methodology for unique data structures with application to business problems. In addition to publishing in a variety of top journals, he has won numerous teaching awards at Wharton, including the MBA Core Curriculum teaching award, the Miller-Sherrerd MBA Core Teaching Award and the Excellence in Teaching Award. Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI’s power while addressing its risks.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/35910100
Wendy Gonzalez: Managing the Humans in the AI Loop
12/12/2024
This week, Kevin Werbach is joined by Wendy Gonzalez of Sama, to discuss the intersection of human judgment and artificial intelligence. Sama provides data annotation, testing, model fine-tuning, and related services for computer vision and generative AI. Kevin and Wendy review Sama's history and evolution, and then consider the challenges of maintaining reliability in AI models through validation and human-centric feedback. Wendy addresses concerns about the ethics of employing workers from the developing world for these tasks. She then shares insights on Sama's commitment to transparency in wages, ethical sourcing, and providing opportunities for those facing the greatest employment barriers. Wendy Gonzalez is the CEO of Sama. Since taking over in 2020, she has led a variety of successes at the company, including launching Machine Learning Assisted Annotation, which has improved annotation efficiency by over 300%. Wendy has over two decades of managerial and technology leadership experience for companies including EY, Capgemini Consulting and Cycle30 (acquired by Arrow Electronics), and is an active Board Member of the Leila Janah Foundation.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/34194215
Jessica Lennard: AI Regulation as Part of a Growth Agenda
12/05/2024
The UK is in a unique position in the global AI landscape. It is home to important AI development labs and corporate AI adopters, but its regulatory regime is distinct from both the US and the European Union. In this episode, Kevin Werbach sits down with Jessica Lennard, the Chief Strategy and External Affairs Officer at the UK's Competition and Markets Authority (CMA). Jessica discusses the CMA's role in shaping AI policy against the backdrop of a shifting political and economic landscape, and how it balances promoting innovation with competition and consumer protection. She highlights the guiding principles that the CMA has established to ensure a fair and competitive AI ecosystem, and how they are designed to establish trust and fair practices across the industry. Jessica Lennard took up the role of Chief Strategy & External Affairs Officer at the CMA in August 2023. Jessica is a member of the Senior Executive Team, an advisor to the Board, and has overall responsibility for Strategy, Communications and External Engagement at the CMA. Previously, she was a Senior Director for Global Data and AI Initiatives at VISA. She also served as an Advisory Board Member for the UK Government Centre for Data Ethics and Innovation. (April 2024)
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/33954477
Tim O'Reilly: The Values of AI Disclosure
11/21/2024
In this episode, Kevin speaks with the influential tech thinker Tim O’Reilly, founder and CEO of O’Reilly Media and popularizer of terms such as open source and Web 2.0. O'Reilly, who co-leads the AI Disclosures Project at the Social Science Research Council, offers an insightful and historically-informed take on AI governance. Tim and Kevin first explore the evolution of AI, tracing its roots from early computing innovations like ENIAC to its current transformative role. Tim notes the centralization of AI development, the critical role of data access, and the costs of creating advanced models. The conversation then delves into AI ethics and safety, covering issues like fairness, transparency, bias, and the need for robust regulatory frameworks. They also examine the potential for distributed AI systems, cooperative models, and industry-specific applications that leverage specialized datasets. Finally, Tim and Kevin highlight the opportunities and risks inherent in AI's rapid growth, urging collaboration, accountability, and innovative thinking to shape a sustainable and equitable future for the technology. Tim O’Reilly is the founder, CEO, and Chairman of O’Reilly Media, which delivers online learning, publishes books, and runs conferences about cutting-edge technology, and has a history of convening conversations that reshape the computer industry. Tim is also a partner at early stage venture firm O’Reilly AlphaTech Ventures (OATV), and on the boards of Code for America, PeerJ, Civis Analytics, and PopVox. He is the author of many technical books published by O’Reilly Media, and most recently WTF? What’s the Future and Why It’s Up to Us (Harper Business, 2017). (SSRC)
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/33954467
Alice Xiang: Connecting Research and Practice for Responsible AI
11/14/2024
Join Professor Werbach in his conversation with Alice Xiang, Global Head of AI Ethics at Sony and Lead Research Scientist at Sony AI. With both a research and corporate background, Alice provides an inside look at how her team integrates AI ethics across Sony's diverse business units. She explains how the evolving landscape of AI ethics is both a challenge and an opportunity for organizations to reposition themselves as the world embraces AI. Alice discusses fairness, bias, and incorporating these ethical ideas in practical business environments. She emphasizes the importance of collaboration, transparency, and diversity in embedding a culture of accountable AI at Sony, showing other organizations how they can do the same. Alice Xiang manages the team responsible for conducting AI ethics assessments across Sony's business units and implementing Sony's AI Ethics Guidelines. She also recently served as a General Chair for the ACM Conference on Fairness, Accountability, and Transparency (FAccT), the premier multidisciplinary research conference on these topics. Alice previously served on the leadership team of the Partnership on AI. She was a Visiting Scholar at Tsinghua University’s Yau Mathematical Sciences Center, where she taught a course on Algorithmic Fairness, Causal Inference, and the Law. Her work has been quoted in a variety of high profile journals and published in top machine learning conferences, journals, and law reviews.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/33349962
Krishna Gade: Observing AI Explainability...and Explaining AI Observability
11/07/2024
Kevin Werbach speaks with Krishna Gade, founder and CEO of Fiddler AI, on the state of explainability for AI models. One of the big challenges of contemporary AI is understanding just why a system generated a certain output. Fiddler is one of the startups offering tools that help developers and deployers of AI understand what exactly is going on. In the conversation, Kevin and Krishna explore the importance of explainability in building trust with consumers, companies, and developers, and then dive into the mechanics of Fiddler's approach to the problem. The conversation covers current and potential regulations that mandate or incentivize explainability, and the prospects for AI explainability standards as AI models grow in complexity. Krishna distinguishes explainability from the broader process of observability, including the necessity of maintaining model accuracy through different times and contexts. Finally, Kevin and Krishna discuss the need for proactive AI model monitoring to mitigate business risks and engage stakeholders. Krishna Gade is the founder and CEO of Fiddler AI, an AI Observability startup, which focuses on monitoring, explainability, fairness, and governance for predictive and generative models. An entrepreneur and engineering leader with strong technical experience in creating scalable platforms and delightful products, Krishna previously held senior engineering leadership roles at Facebook, Pinterest, Twitter, and Microsoft. At Facebook, Krishna led the News Feed Ranking Platform that created the infrastructure for ranking content in News Feed and powered use-cases like Facebook Stories and user recommendations.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/33395487
Angela Zhang: What’s Really Happening with AI (and AI Governance) in China
10/31/2024
This week, Professor Werbach is joined by USC Law School professor Angela Zhang, an expert on China's approach to the technology sector. China is both one of the world's largest markets and home to some of the world's leading tech firms, as well as an active ecosystem of AI developers. Yet its relationship to the United States has become increasingly tense. Many in the West see a battle between the US and China to dominate AI, with significant geopolitical implications. In the episode, Zhang discusses China’s rapidly evolving tech and AI landscape, and the impact of government policies on its development. She dives into what the Chinese government does and doesn’t do in terms of AI regulation, and compares Chinese practices to those in the West. Kevin and Angela consider the implications of US export controls on AI-related technologies, along with the potential for cooperation between the US and China in AI governance. Finally, they look toward the future of Chinese AI including its progress and potential challenges. Angela Huyue Zhang is a Professor of Law at the Gould School of Law of the University of Southern California. She is the author of Chinese Antitrust Exceptionalism: How the Rise of China Challenges Global Regulation, which was named one of the Best Political Economy Books of the Year by ProMarket in 2021. Her second book, High Wire: How China Regulates Big Tech and Governs Its Economy, released in March 2024, has been covered in The New York Times, Bloomberg, Wire China, MIT Tech Review and many other international news outlets. Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI’s power while addressing its risks.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/33352637
info_outline
Shea Brown: AI Auditing Gets Real
10/24/2024
Shea Brown: AI Auditing Gets Real
Professor Werbach speaks with Shea Brown, founder of AI auditing firm BABL AI. Brown discusses how his work as an astrophysicist led him to machine learning, and then to the challenge of evaluating AI systems. He explains the skills needed for effective AI auditing and what makes a robust AI audit. Kevin and Shea talk about the growing landscape of AI auditing services and the strategic role of specialized firms like BABL AI. They examine the evolving standards and regulations surrounding AI auditing, from local laws to US government initiatives to the European Union's AI Act. Finally, Kevin and Shea discuss the future of AI auditing, emphasizing the importance of independence. Shea Brown, the founder and CEO of BABL AI, is a researcher, speaker, and consultant in AI ethics, and a former associate professor of instruction in Astrophysics at the University of Iowa. Founded in 2018, BABL AI has audited and certified AI systems, consulted on responsible AI best practices, and offered online education on related topics. BABL AI's overall mission is to ensure that all algorithms are developed, deployed, and governed in ways that prioritize human flourishing. Shea is a founding member of the International Association of Algorithmic Auditors (IAAA). Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/33349662
info_outline
Kevin Bankston: The Value of Open AI Models
10/17/2024
Kevin Bankston: The Value of Open AI Models
This week, Professor Werbach is joined by Kevin Bankston, Senior Advisor on AI Governance for the Center for Democracy & Technology, to discuss the benefits and risks of open weight frontier AI models. They discuss the meaning of open foundation models, how they relate to open source software, how such models could accelerate technological advancement, and the debate over their risks and the need for restrictions. Bankston discusses the National Telecommunications and Information Administration's recent recommendations on open weight models, and CDT's response to the request for comments. Bankston also shares insights based on his prior work as AI Policy Director at Meta, and discusses national security concerns around China's ability to exploit open AI models. Kevin Bankston is Senior Advisor on AI Governance for the Center for Democracy & Technology, supporting CDT's AI Governance Lab. In addition to a prior term as Director of CDT's Free Expression Project, he has worked on internet privacy and related policy issues at the American Civil Liberties Union, the Electronic Frontier Foundation, the Open Technology Institute, and Meta Platforms. He was named by Washingtonian magazine as one of DC's 100 top tech leaders of 2017. Kevin serves as an adjunct professor at the Georgetown University Law Center, where he teaches on the emerging law and policy around generative AI. Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/33347087
info_outline
Lara Abrash: How Organizations Can Meet the AI Challenge
10/10/2024
Lara Abrash: How Organizations Can Meet the AI Challenge
In this episode, Professor Kevin Werbach sits down with Lara Abrash, Chair of Deloitte US. Lara and Kevin discuss the complexities of integrating generative AI systems into companies and aligning stakeholders in making AI trustworthy. They discuss how to address bias, and the ways Deloitte promotes trust throughout its organization. Lara explains the role and technological expertise of boards, company risk management, and the global regulatory environment. Finally, Lara discusses the ways in which Deloitte supports both its people and the services they provide. Lara Abrash is the Chair of Deloitte US, leading the Board of Directors in governing all aspects of the US Firm. Overseeing over 170,000 employees, Lara is a member of Deloitte's Global Board of Directors and Chair of the Deloitte Foundation. Lara stepped into this role after serving as the chief executive officer of the Deloitte US Audit & Assurance business. Lara frequently speaks on topics focused on advancing the profession, including modern leadership traits; diversity, equity, and inclusion; the future of work; and tech disruption. She is a member of the American Institute of Certified Public Accountants and received her MBA from Baruch College. Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/33055217
info_outline
Adam Thierer: Where AI Regulation Can Go Wrong
10/03/2024
Adam Thierer: Where AI Regulation Can Go Wrong
Professor Werbach speaks with Adam Thierer, senior fellow for Technology and Innovation at the R Street Institute. Adam and Kevin highlight developments in AI regulation at the state, federal, and international levels, and discuss both the benefits and dangers of regulatory engagement in the area. They consider the notion of AI as a "field of fields," and the value of a sectoral approach to regulation, looking back to the development of regulatory approaches for the internet. Adam discusses what types of AI regulations can best balance accountability with innovation, protecting smaller AI developers and startups. Adam Thierer specializes in entrepreneurialism, Internet, and free-speech issues, with a focus on emerging technologies. He is a senior fellow for the Technology & Innovation team at the R Street Institute, a leading public policy think tank, and previously spent 12 years as a senior fellow at the Mercatus Center at George Mason University. Adam has also worked for the Progress and Freedom Foundation, the Adam Smith Institute, the Heritage Foundation, and the Cato Institute. Adam has published 10 books on a wide range of topics, including online child safety, internet governance, intellectual property, telecommunications policy, media regulation, and federalism. Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/33016807
info_outline
Reggie Townsend: The Deliberate and Intentional Path to Trustworthy AI
09/26/2024
Reggie Townsend: The Deliberate and Intentional Path to Trustworthy AI
In this episode, Kevin Werbach is joined by Reggie Townsend, VP of Data Ethics at SAS, the business analytics software company. Together they discuss SAS's nearly 50-year history of supporting business technology and its recent implementation of responsible AI initiatives. Reggie introduces model cards and the importance of variety in AI systems across diverse stakeholders and sectors. Reggie and Kevin explore how consumer trust and purchases increase when people feel a brand is ethical in its use of AI, and the importance of trustworthy AI in employee retention and recruitment. Their discussion approaches the idea of bias in an untraditional way, highlighting the positive, humanistic nature of bias and learning to manage its negative implications. Finally, Reggie shares his insights on fostering ethical AI practices through literacy and open dialogue, stressing the importance of authentic commitment and collaboration among developers, deployers, and regulators. Reggie Townsend oversees the Data Ethics Practice (DEP) at SAS Institute. He leads the global effort for consistency and coordination of strategies that empower employees and customers to deploy data-driven systems that promote human well-being, agency, and equity. He has over 20 years of experience in strategic planning, management, and consulting, focusing on topics such as advanced analytics, cloud computing, and artificial intelligence. With visibility across multiple industries and sectors where the use of AI is growing, he combines this extensive business and technology expertise with a passion for equity and human empowerment. Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/33016697
info_outline
Helen Toner: AI Safety in a World of Uncertainty
09/19/2024
Helen Toner: AI Safety in a World of Uncertainty
Join Professor Kevin Werbach in his discussion with Helen Toner, Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology. In this episode, Werbach and Toner discuss how the public views AI safety and ethics, and both the positive and negative outcomes of advancements in AI. They discuss Toner's lessons from the unsuccessful removal of Sam Altman as the CEO of OpenAI, oversight structures to audit and approve the AI systems companies deploy, and the role of the government in AI accountability. Finally, Toner explains how businesses can take charge of their responsible AI deployment. Helen Toner is the Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University's Center for the Governance of AI. From 2021 to 2023, she served on the board of OpenAI, the creator of ChatGPT. Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/33016667
info_outline
See you in September for Season 2!
07/18/2024
See you in September for Season 2!
After 16 episodes in Season 1, we're taking a summer break at The Road to Accountable AI. Look for more compelling content on AI governance, regulation, ethics, and responsibility when we return in September.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/32172937
info_outline
Nuala O'Connor: Frontline Human-Machine Interface at Walmart
07/11/2024
Nuala O'Connor: Frontline Human-Machine Interface at Walmart
Join Kevin and Nuala as they discuss Walmart's approach to AI governance, emphasizing the application of existing corporate principles to new technologies. She explains the Walmart Responsible AI Pledge, its collaborative creation process, and the importance of continuous monitoring to ensure AI tools align with corporate values. Nuala describes her commitment to responsible AI and customer centricity at Walmart, guided by the mantra "Inform, Educate, Entertain" and illustrated by examples like the "Ask Sam" tool that aids associates. They address the complexities of AI implementation, including bias, accuracy, and trust, and the challenges of standardizing AI frameworks. Kevin and Nuala conclude with reflections on the need for humility and agility in the evolving AI landscape, emphasizing the ongoing responsibility of technology providers to ensure positive impacts. Nuala O'Connor is the SVP and chief counsel, digital citizenship, at Walmart. Nuala leads the company's Digital Citizenship organization, which advances the ethical use of data and responsible use of technology. Before joining Walmart, Nuala served as president and CEO of the Center for Democracy and Technology. In the private sector, Nuala has served in a variety of privacy leadership and legal counsel roles at Amazon, GE, and DoubleClick. In the public sector, Nuala served as the first chief privacy officer at the U.S. Department of Homeland Security. She also served as deputy director of the Office of Policy and Strategic Planning, and later as chief counsel for technology, at the U.S. Department of Commerce. Nuala holds a B.A. from Princeton University, an M.Ed. from Harvard University, and a J.D. from Georgetown University Law Center. Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/31758305
info_outline
Suresh Venkatasubramanian: Blueprints and Redesign: Academia to the White House and Back
07/04/2024
Suresh Venkatasubramanian: Blueprints and Redesign: Academia to the White House and Back
Join Kevin and Suresh as they discuss the latest tools and frameworks that companies can use to effectively combat algorithmic bias, all while navigating the complexities of integrating AI into organizational strategies. Suresh describes his experiences at the White House Office of Science and Technology Policy and the creation of the Blueprint for an AI Bill of Rights, including its five fundamental principles: safety and effectiveness, non-discrimination, data minimization, transparency, and accountability. Suresh and Kevin dig into the economic and logistical challenges that academics face in government roles, and highlight the importance of collaborative efforts, alongside clear rules to follow, in fostering ethical AI. The discussion underscores the importance of education, cultural shifts, and the role of the European Union's AI Act in shaping global regulatory frameworks. Suresh discusses his creation of Brown University's Center on Technological Responsibility, Reimagination, and Redesign, and why trust and accountability are paramount, especially with the rise of Large Language Models. Suresh Venkatasubramanian is a Professor of Data Science and Computer Science at Brown University. Suresh's background is in algorithms and computational geometry, as well as data mining and machine learning. His current research interests lie in algorithmic fairness, and more generally the impact of automated decision-making systems in society. Prior to Brown University, Suresh was at the University of Utah, where he received a CAREER award from the NSF for his work in the geometry of probability. He received a test-of-time award at ICDE 2017 for his work in privacy. His research on algorithmic fairness has received press coverage across North America and Europe, including NPR's Science Friday, NBC, and CNN, as well as other media outlets.
For the 2021–2022 academic year, he served as Assistant Director for Science and Justice in the White House Office of Science and Technology Policy. Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI’s power while addressing its risks.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/31758297
info_outline
Diya Wynn: People-Centric Technology
06/27/2024
Diya Wynn: People-Centric Technology
Kevin Werbach speaks with Diya Wynn, the responsible AI lead at Amazon Web Services (AWS). Diya shares how she pioneered a formal practice for ethical AI at AWS, and explains AWS's "Well-Architected" framework for assisting customers in responsibly deploying AI. Kevin and Diya also discuss the significance of diversity and human bias in AI systems, revealing the necessity of incorporating diverse perspectives to create more equitable AI outcomes. Diya Wynn leads a team at AWS that helps customers implement responsible AI practices. She has over 25 years of experience as a technologist scaling products for acquisition; driving inclusion, diversity, and equity initiatives; and leading operational transformation. She serves on the AWS Health Equity Initiative Review Committee; mentors at Tulane University, Spelman College, and GMI; was a mayoral appointee in Environment Affairs for six years; and guest lectures regularly on responsible and inclusive technology. Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/31661377
info_outline
Paula Goldman: Putting Humans at the Helm
06/20/2024
Paula Goldman: Putting Humans at the Helm
Kevin Werbach is joined by Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, to discuss her team's pioneering efforts in building a culture of ethical technology use. Paula shares insights on aligning risk assessments and technical mitigations with business goals to bring stakeholders on board. She explains how AI governance functions in a large business with enterprise customers, who have distinctive needs and approaches. Finally, she highlights the shift from "human in the loop" to "human at the helm" as AI technology advances, stressing that today's investments in trustworthy AI are essential for managing tomorrow's more advanced systems. Paula Goldman leads Salesforce in creating a framework to build and deploy ethical technology that optimizes social benefit. Prior to Salesforce, she served as Global Lead of the Tech and Society Solutions Lab at Omidyar Network, and has extensive entrepreneurial experience managing frontier market businesses. Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
/episode/index/show/524f620b-2515-4e33-b87b-b9eef246c60d/id/31658962