Notes from Hip Hop MC-turned Indigenous Librarian Alex Soto on Archiving and Accessing Indigenous Cultural Knowledge
Release Date: 10/01/2024
Privacy On The Ground
Turning what Alex Soto refers to as sometimes “lofty, grand” theoretical Indigenous Data Sovereignty principles and protocols into practice can be mundane, even tedious. It can require combing through hundreds of years’ worth of paper documents, photos, oral histories of sensitive cultural knowledge in various formats, and other materials. It requires dedicated investments of time and money, and it requires on-the-ground communication and connection with tribal communities.

In this talk, recorded in April 2024 at the Labriola National American Indian Data Center, an Indigenous library on Arizona State University’s Tempe campus near Phoenix, where Soto serves as the library’s first Native director, he discusses what this day-to-day reality entails, where the gaps between policy and practice emerge, and what it might take to bring them closer together.

This episode of Privacy on the Ground is part of a series from World Privacy Forum exploring Indigenous Data topics through talks with Indigenous leaders who are guiding pathways toward implementing sometimes-theoretical Indigenous data principles in real-life practice. Don’t miss the treat at the end of this episode, when Soto, a hip hop MC with a longtime interest in socially conscious music, spits a rhyme.