
Azeem Azhar: Exponential Challenges of Exponential Technologies

The Road to Accountable AI

Release Date: 04/18/2024

Wendy Gonzalez: Managing the Humans in the AI Loop

The Road to Accountable AI

This week, Kevin Werbach is joined by Wendy Gonzalez of Sama to discuss the intersection of human judgment and artificial intelligence. Sama provides data annotation, testing, model fine-tuning, and related services for computer vision and generative AI. Kevin and Wendy review Sama's history and evolution, and then consider the challenges of maintaining reliability in AI models through validation and human-centric feedback. Wendy addresses concerns about the ethics of employing workers from the developing world for these tasks. She then shares insights on Sama's commitment to transparency in...

Jessica Lennard: AI Regulation as Part of a Growth Agenda

The Road to Accountable AI

The UK is in a unique position in the global AI landscape. It is home to important AI development labs and corporate AI adopters, but its regulatory regime is distinct from both the US and the European Union. In this episode, Kevin Werbach sits down with Jessica Lennard, the Chief Strategy and External Affairs Officer at the UK's Competition and Markets Authority (CMA). Jessica discusses the CMA's role in shaping AI policy against the backdrop of a shifting political and economic landscape, and how it balances promoting innovation with competition and consumer protection. She highlights the...

Tim O'Reilly: The Values of AI Disclosure

The Road to Accountable AI

In this episode, Kevin speaks with the influential tech thinker Tim O’Reilly, founder and CEO of O’Reilly Media and popularizer of terms such as open source and Web 2.0. O'Reilly, who co-leads the AI Disclosures Project at the Social Science Research Council, offers an insightful and historically informed take on AI governance. Tim and Kevin first explore the evolution of AI, tracing its roots from early computing innovations like ENIAC to its current transformative role. Tim notes the centralization of AI development, the critical role of data access, and the costs of creating...

Alice Xiang: Connecting Research and Practice for Responsible AI

The Road to Accountable AI

Join Professor Werbach in his conversation with Alice Xiang, Global Head of AI Ethics at Sony and Lead Research Scientist at Sony AI. With both a research and corporate background, Alice provides an inside look at how her team integrates AI ethics across Sony's diverse business units. She explains how the evolving landscape of AI ethics is both a challenge and an opportunity for organizations to reposition themselves as the world embraces AI. Alice discusses fairness, bias, and how to incorporate these ethical ideas in practical business environments. She emphasizes the importance of collaboration,...

Krishna Gade: Observing AI Explainability...and Explaining AI Observability

The Road to Accountable AI

Kevin Werbach speaks with Krishna Gade, founder and CEO of Fiddler AI, on the state of explainability for AI models. One of the big challenges of contemporary AI is understanding just why a system generated a certain output. Fiddler is one of the startups offering tools that help developers and deployers of AI understand what exactly is going on. In the conversation, Kevin and Krishna explore the importance of explainability in building trust with consumers, companies, and developers, and then dive into the mechanics of Fiddler's approach to the problem. The conversation covers...

Angela Zhang: What’s Really Happening with AI (and AI Governance) in China

The Road to Accountable AI

This week, Professor Werbach is joined by USC Law School professor Angela Zhang, an expert on China's approach to the technology sector. China is both one of the world's largest markets and home to some of the world's leading tech firms, as well as an active ecosystem of AI developers. Yet its relationship with the United States has become increasingly tense. Many in the West see a battle between the US and China to dominate AI, with significant geopolitical implications. In the episode, Zhang discusses China’s rapidly evolving tech and AI landscape, and the impact of government policies on...

Shea Brown: AI Auditing Gets Real

The Road to Accountable AI

Professor Werbach speaks with Shea Brown, founder of AI auditing firm BABL AI. Brown discusses how his work as an astrophysicist led him to AI and machine learning, and then to the challenge of evaluating AI systems. He explains the skills needed for effective AI auditing and what makes a robust AI audit. Kevin and Shea talk about the growing landscape of AI auditing services and the strategic role of specialized firms like BABL AI. They examine the evolving standards and regulations surrounding AI auditing, from local laws to US government initiatives to the European Union's AI Act. Finally,...

Kevin Bankston: The Value of Open AI Models

The Road to Accountable AI

This week, Professor Werbach is joined by Kevin Bankston, Senior Advisor on AI Governance for the Center for Democracy & Technology, to discuss the benefits and risks of open weight frontier AI models. They discuss the meaning of open foundation models, how they relate to open source software, how such models could accelerate technological advancement, and the debate over their risks and need for restrictions. Bankston discusses the National Telecommunications and Information Administration's recent recommendations on open weight models, and CDT's response to the request for comments....

Lara Abrash: How Organizations Can Meet the AI Challenge

The Road to Accountable AI

In this episode, Professor Kevin Werbach sits down with Lara Abrash, Chair of Deloitte US. Lara and Kevin discuss the complexities of integrating generative AI systems into companies and aligning stakeholders in making AI trustworthy. They discuss how to address bias, and the ways Deloitte promotes trust throughout its organization. Lara explains the role and technological expertise of boards, company risk management, and the global regulatory environment. Finally, Lara discusses how Deloitte manages both its people and the services they provide. Lara Abrash is the Chair of...

Adam Thierer: Where AI Regulation Can Go Wrong

The Road to Accountable AI

Professor Werbach speaks with Adam Thierer, senior fellow for Technology and Innovation at the R Street Institute. Adam and Kevin highlight developments in AI regulation at the state, federal, and international levels, and discuss both the benefits and dangers of regulatory engagement in the area. They consider the notion of AI as a “field-of-fields,” and the value of a sectoral approach to regulation, looking back to the development of regulatory approaches for the internet. Adam discusses what types of AI regulations can best balance accountability with innovation, protecting smaller AI...

 

Professor Werbach is joined by Azeem Azhar, a leading expert on exponential technologies, for a wide-ranging conversation on the trajectory of AI, regulation, and the broader challenge of concentration in the tech sector. They trace Azeem’s professional journey and the pivotal moments in AI development, such as the rise of deep learning, and discuss the implications for business leaders now at the helm of these potent tools. Drawing parallels with historical tech calamities, they examine the safety challenges inherent in large language models and how companies like Google and OpenAI balance the race for innovation with the need for thorough testing. The conversation then turns to regulation and the tug-of-war between progress and control, with a spotlight on the EU's Digital Markets Act and its impact on global tech firms.

Azeem Azhar is the author of "Exponential: How Accelerating Technology is Leaving Us Behind and What to Do About It", which quickly became an Amazon bestseller in Geopolitics upon its release. Founder of the data analytics firm PeerIndex, later acquired by Brandwatch, Azeem is also an active angel investor, with investments in over 30 startups, including early-stage companies in AI, renewable energy, and women's health tech. Some of his most notable interviews include discussions with OpenAI CEO Sam Altman, Anthropic co-founder and CEO Dario Amodei, and Silicon Valley investor Vinod Khosla, covering topics from the implications of AI for ownership of thoughts to AI's potential impact on global inequality and the need to rethink assumptions about conflict to avoid a second Cold War. His ability to break down complex technological concepts and their societal implications has earned him recognition as a leading futurist and exponential thinker.

Exponential View, Azeem’s Substack and community

Azeem’s book The Exponential Age

Azeem and Sam Altman's Discussion

EU AI Act