Lawyer Will Tao on the Real-World Impacts of AI and Canada’s Algorithmic Impact Assessments on Immigrants

Privacy On The Ground

Release Date: 07/12/2025

Will Tao knows first-hand how the automated, algorithmic, and machine learning systems used by Canada's government affect lives. The founder of Heron Law Offices in Burnaby, British Columbia and co-founder of AIMICI (the AI Monitor for Immigration in Canada and Internationally), Tao practices immigration, refugee, and citizenship law in Canada. He has watched as these systems automatically determine or inform decisions affecting the lives of his clients, sometimes influencing whether they can legally work, and even whether they must separate from their spouses or children. In this interview, recorded in November 2024, World Privacy Forum's Kate Kaye talks with Tao about how the use of algorithmic systems by Canada's immigration agency affects his clients, his experiences with Canada's Algorithmic Impact Assessments, and what he hopes to see change in relation to AI use and AI governance in Canada.

Featured in this episode:

  • Will Tao, founder of Heron Law Offices in Burnaby, British Columbia and co-founder of AIMICI (the AI Monitor for Immigration in Canada and Internationally)
  • Kate Kaye, deputy director of World Privacy Forum

This episode of Privacy on the Ground features music by Maciej Sadowski. The Privacy on the Ground intro theme features music by Pangal.