
Kevin Bankston: The Value of Open AI Models

The Road to Accountable AI

Release Date: 10/17/2024

Ashley Casovan: From Privacy Practice to AI Governance

Professor Werbach talks with Ashley Casovan, Managing Director of the AI Governance Center at the IAPP, the global association for privacy professionals and related roles. Ashley shares how privacy, data protection, and AI governance are converging, and why professionals must combine technical, policy, and risk expertise. They discuss efforts to build a skills competency framework for AI roles and examine the evolving global regulatory landscape, from the EU's AI Act to U.S. state-level initiatives. Drawing on Ashley's experience in the Canadian government, the episode also explores...

Lauren Wagner: The Potential of Private AI Governance

Kevin Werbach interviews Lauren Wagner, a builder and advocate for market-driven approaches to AI governance. Lauren shares insights from her experiences at Google and Meta, emphasizing the critical intersection of technology, policy, and trust-building. She describes the private AI governance model, along with private-sector incentives and transparency measures, such as enhanced model cards, that can guide responsible AI development without heavy-handed regulation. Lauren also explores ongoing challenges around liability, insurance, and government involvement, highlighting the potential...

Medha Bankhwal and Michael Chui: Implementing AI Trust

Kevin Werbach speaks with Medha Bankhwal and Michael Chui from QuantumBlack, the AI division of the global consulting firm McKinsey. They discuss how McKinsey's AI work has evolved from strategy consulting to hands-on implementation, with AI trust now embedded throughout their client engagements. Chui highlights what makes the current AI moment transformative, while Bankhwal shares insights from McKinsey's recent AI survey of over 760 organizations across 38 countries. As they explain, trust remains a major barrier to AI adoption, although there are geographic differences in AI governance...

Eric Bradlow: AI Goes to Business School

Kevin Werbach speaks with Eric Bradlow, Vice Dean of AI & Analytics at Wharton. Bradlow highlights the transformative impacts of AI from his perspective as an applied statistician and quantitative marketing expert. He describes the distinctive approach of Wharton's analytics program, and its recent evolution with the rise of AI. The conversation highlights the significance of legal and ethical responsibility within the AI field, and the genesis of the new Wharton Accountable AI Lab. Werbach and Bradlow then examine the role of academic institutions in shaping the future of AI, and how...

Wendy Gonzalez: Managing the Humans in the AI Loop

This week, Kevin Werbach is joined by Wendy Gonzalez of Sama, to discuss the intersection of human judgment and artificial intelligence. Sama provides data annotation, testing, model fine-tuning, and related services for computer vision and generative AI. Kevin and Wendy review Sama's history and evolution, and then consider the challenges of maintaining reliability in AI models through validation and human-centric feedback. Wendy addresses concerns about the ethics of employing workers from the developing world for these tasks. She then shares insights on Sama's commitment to transparency in...

Jessica Lennard: AI Regulation as Part of a Growth Agenda

The UK is in a unique position in the global AI landscape. It is home to important AI development labs and corporate AI adopters, but its regulatory regime is distinct from both the US and the European Union. In this episode, Kevin Werbach sits down with Jessica Lennard, the Chief Strategy and External Affairs Officer at the UK's Competition and Markets Authority (CMA). Jessica discusses the CMA's role in shaping AI policy against the backdrop of a shifting political and economic landscape, and how it balances promoting innovation with competition and consumer protection. She highlights the...

Tim O'Reilly: The Values of AI Disclosure

In this episode, Kevin speaks with the influential tech thinker Tim O’Reilly, founder and CEO of O’Reilly Media and popularizer of terms such as open source and Web 2.0. O'Reilly, who co-leads the AI Disclosures Project at the Social Science Research Council, offers an insightful and historically informed take on AI governance. Tim and Kevin first explore the evolution of AI, tracing its roots from early computing innovations like ENIAC to its current transformative role. Tim notes the centralization of AI development, the critical role of data access, and the costs of creating...

Alice Xiang: Connecting Research and Practice for Responsible AI

Join Professor Werbach in his conversation with Alice Xiang, Global Head of AI Ethics at Sony and Lead Research Scientist at Sony AI. With both a research and corporate background, Alice provides an inside look at how her team integrates AI ethics across Sony's diverse business units. She explains how the evolving landscape of AI ethics is both a challenge and an opportunity for organizations to reposition themselves as the world embraces AI. Alice discusses fairness, bias, and incorporating these ethical ideas in practical business environments. She emphasizes the importance of collaboration,...

Krishna Gade: Observing AI Explainability...and Explaining AI Observability

Kevin Werbach speaks with Krishna Gade, founder and CEO of Fiddler AI, on the state of explainability for AI models. One of the big challenges of contemporary AI is understanding just why a system generated a certain output. Fiddler is one of the startups offering tools that help developers and deployers of AI understand what exactly is going on. In the conversation, Kevin and Krishna explore the importance of explainability in building trust with consumers, companies, and developers, and then dive into the mechanics of Fiddler's approach to the problem. The conversation covers...

Angela Zhang: What’s Really Happening with AI (and AI Governance) in China

This week, Professor Werbach is joined by USC Law School professor Angela Zhang, an expert on China's approach to the technology sector. China is both one of the world's largest markets and home to some of the world's leading tech firms, as well as an active ecosystem of AI developers. Yet its relationship to the United States has become increasingly tense. Many in the West see a battle between the US and China to dominate AI, with significant geopolitical implications. In the episode, Zhang discusses China’s rapidly evolving tech and AI landscape, and the impact of government policies on...


This week, Professor Werbach is joined by Kevin Bankston, Senior Advisor on AI Governance for the Center for Democracy & Technology, to discuss the benefits and risks of open weight frontier AI models. They discuss the meaning of open foundation models, how they relate to open source software, how such models could accelerate technological advancement, and the debate over their risks and need for restrictions. Bankston discusses the National Telecommunications and Information Administration's recent recommendations on open weight models, and CDT's response to the request for comments. Bankston also shares insights based on his prior work as AI Policy Director at Meta, and discusses national security concerns around China's ability to exploit open AI models. 

Kevin Bankston is Senior Advisor on AI Governance for the Center for Democracy & Technology, supporting CDT’s AI Governance Lab. In addition to a prior term as Director of CDT’s Free Expression Project, he has worked on internet privacy and related policy issues at the American Civil Liberties Union, the Electronic Frontier Foundation, the Open Technology Institute, and Meta Platforms. He was named by Washingtonian magazine as one of DC’s 100 top tech leaders of 2017. Kevin serves as an adjunct professor at the Georgetown University Law Center, where he teaches on the emerging law and policy around generative AI.

CDT Comments to NTIA on Open Foundation Models by Kevin Bankston

CDT Submits Comment on AISI's Draft Guidance, "Managing Misuse Risk for Dual-Use Foundation Models"

Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI’s power while addressing its risks.