
Tim O'Reilly: The Values of AI Disclosure

The Road to Accountable AI

Release Date: 11/21/2024

Kay Firth-Butterfield: Using AI Wisely

Kevin Werbach interviews Kay Firth-Butterfield about how responsible AI has evolved from a niche concern to a global movement. As the world’s first Chief AI Ethics Officer and former Head of AI at the World Economic Forum, Firth-Butterfield brings deep experience aligning AI with human values. She reflects on the field's evolution from its early days, when it was dominated by philosophical debates, to today, when regulation such as the European Union's AI Act is defining the rules of the road. Firth-Butterfield highlights the growing trust gap in AI, warning that rapid deployment without...

Dale Cendali: How Courts (and Maybe Congress!) Will Determine AI's Copyright Fate

Kevin Werbach interviews Dale Cendali, one of the country’s leading intellectual property (IP) attorneys, to discuss how courts are grappling with copyright questions in the age of generative AI. Over 30 IP lawsuits have already been filed against major generative AI firms, and the outcomes may shape the future of AI as well as creative industries. While we couldn't discuss specifics of one of the most talked-about cases, Thomson Reuters v. ROSS -- because Cendali is litigating it on behalf of Thomson Reuters -- she drew on her decades of experience in IP law to provide an engaging look at the legal...

Brenda Leong: Building AI Law Amid Legal Uncertainty

         

Shameek Kundu: AI Testing and the Quest for Boring Predictability

Kevin Werbach interviews Shameek Kundu, Executive Director of AI Verify Foundation, to explore how organizations can ensure AI systems work reliably in real-world contexts. AI Verify, a government-backed nonprofit in Singapore, aims to build scalable, practical testing frameworks to support trustworthy AI adoption. Kundu emphasizes that testing should go beyond models to include entire applications, accounting for their specific environments, risks, and data quality. He draws on lessons from AI Verify’s Global AI Assurance pilot, which matched real-world AI deployers—such as hospitals and...

Uthman Ali: Responsible AI in a Safety Culture

Host Kevin Werbach interviews Uthman Ali, Global Responsible AI Officer at BP, to delve into the complexities of implementing responsible AI practices within a global energy company. Ali emphasizes how the industry's culture of safety shapes BP's willingness to engage in AI governance. He discusses the necessity of embedding ethical AI principles across all levels of the organization, with tailored training programs for various employee roles—from casual AI users to data scientists—to ensure a comprehensive understanding of AI’s ethical implications. He also highlights...

Karen Hao: Is Imperial AI Inevitable?

Kevin Werbach interviews journalist and author Karen Hao about her new book Empire of AI, which chronicles the rise of OpenAI and the broader implications of generative artificial intelligence. Hao reflects on how the ethical challenges of AI have evolved, noting the shift from concerns like data privacy and algorithmic bias to more complex issues such as intellectual property violations, environmental impact, misleading user experiences, and concentration of power. She emphasizes that while some technical solutions exist, they are rarely implemented by developers, and foundational...

Jaime Banks: How Users Perceive AI Companions

AI companion applications, which create interactive personas for one-on-one conversations, are incredibly popular. However, they raise a number of challenging ethical, legal, and psychological questions. In this episode, Kevin Werbach speaks with researcher Jaime Banks about how users view their conversations with AI companions, and the implications for governance. Banks shares insights from her research on mind-perception, and how AI companion users engage in a willing suspension of disbelief similar to watching a movie. She highlights both potential benefits and dangers, as well as novel...

Kelly Trindel: AI Governance Across the Enterprise? All in a Day’s Work

In this episode, Kevin Werbach interviews Kelly Trindel, Head of Responsible AI at Workday. Although Trindel's team is housed within Workday’s legal department, it operates as a multidisciplinary group, bringing together legal, policy, data science, and product expertise. This structure helps ensure that responsible AI practices are integrated not just at the compliance level but throughout product development and deployment. She describes formal mechanisms—such as model review boards and cross-functional risk assessments—that embed AI governance into product workflows across the...

David Weinberger: How AI Challenges Our Fundamental Ideas

Professor Werbach interviews David Weinberger, author of several books and a long-time deep thinker on internet trends, about the broader implications of AI on how we understand and interact with the world. They examine the idea that throughout history, dominant technologies—like the printing press, the clock, or the computer—have subtly but profoundly shaped our concepts of knowledge, intelligence, and identity. Weinberger argues that AI, and especially machine learning, represents a new kind of paradigm shift: unlike traditional computing, which requires humans to explicitly encode...

Ashley Casovan: From Privacy Practice to AI Governance

Professor Werbach talks with Ashley Casovan, Managing Director of the AI Governance Center at the IAPP, the global association for privacy professionals and related roles. Ashley shares how privacy, data protection, and AI governance are converging, and why professionals must combine technical, policy, and risk expertise. They discuss efforts to build a skills competency framework for AI roles and examine the evolving global regulatory landscape—from the EU’s AI Act to U.S. state-level initiatives. Drawing on Ashley’s experience in the Canadian government, the episode also explores...


In this episode, Kevin speaks with the influential tech thinker Tim O’Reilly, founder and CEO of O’Reilly Media and popularizer of terms such as open source and Web 2.0. O'Reilly, who co-leads the AI Disclosures Project at the Social Science Research Council, offers an insightful and historically informed take on AI governance. Tim and Kevin first explore the evolution of AI, tracing its roots from early computing innovations like ENIAC to its current transformative role. Tim notes the centralization of AI development, the critical role of data access, and the costs of creating advanced models. The conversation then delves into AI ethics and safety, covering issues like fairness, transparency, bias, and the need for robust regulatory frameworks. They also examine the potential for distributed AI systems, cooperative models, and industry-specific applications that leverage specialized datasets. Finally, Tim and Kevin highlight the opportunities and risks inherent in AI's rapid growth, urging collaboration, accountability, and innovative thinking to shape a sustainable and equitable future for the technology.

Tim O’Reilly is the founder, CEO, and Chairman of O’Reilly Media, which delivers online learning, publishes books, and runs conferences about cutting-edge technology, and has a history of convening conversations that reshape the computer industry. Tim is also a partner at the early-stage venture firm O’Reilly AlphaTech Ventures (OATV) and serves on the boards of Code for America, PeerJ, Civis Analytics, and PopVox. He is the author of many technical books published by O’Reilly Media, most recently WTF? What’s the Future and Why It’s Up to Us (Harper Business, 2017).

SSRC, AI Disclosures Project

Asimov's Addendum Substack

The First Step to Proper AI Regulation Is to Make Companies Fully Disclose the Risks