University of Zurich Researchers Conducted an AI Persuasion Experiment on Members of This Online Community, Without Consent
Release Date: 08/04/2025
Community Signal
In March, the volunteer moderators of the Change My View subreddit learned that researchers at the University of Zurich had been covertly conducting an experiment on their community members. By injecting AI-generated comments and posts into conversations, the researchers sought to measure the persuasiveness of AI.
There was one big problem: They didn’t tell community members that they were being experimented on. They didn’t tell the community moderators. They didn’t tell Reddit’s corporate team. Only when they were getting ready to publish did they disclose their actions.
It then became clear that, beyond the lack of consent, they had engaged in other questionable behavior: Their AI-written contributions spanned multiple accounts that pretended to be a rape victim, a trauma counselor focusing on abuse, a Black man opposed to Black Lives Matter, and more.
The community response was swift: Members were overwhelmingly unhappy. The moderators insisted the research not be published. Reddit threatened legal action. Initially, the researchers were defiant, but they eventually apologized and pledged not to publish the research.
Change My View volunteer moderator Logan MacGregor joins the show to discuss what went on behind the scenes, plus:
- The danger of publishing the research
- Reaction to the apology
- How AI is going to challenge the idea of trusting an online community
Big Quotes
Blame the manipulators, not the members and moderators (1:49): “Manipulation in online communities has existed forever. What’s happening with [AI is] the believability, the speed at which people can do it. … The fault always rests with the person who chooses to manipulate the community. It’s easy to fool people … and to do something that undermines the trust of something. It’s harder to build trust.” -Patrick O’Keefe
Why a promise not to publish was important (13:21): “From my perspective, I think the things that we wanted the most [from the researchers were] an apology and a promise not to publish. The second was really important because we were concerned that if this was published in a peer-review journal … if it was elevated to a prominent journal, that our community, which is supposed to be a protected human space, would now become just another sandbox for researchers. We felt very strongly that it should not be published. … Unfortunately, it didn’t land well.” -Logan MacGregor
When a community leader stands for their community, they often stand for all communities (14:52): “When one community person – a volunteer, a host, a person in this line of work – stands up for their community, they stand up for all communities.” -Patrick O’Keefe
Just because bad comments exist online doesn’t mean new ones won’t cause harm (20:10): “So much of what [the researchers] did to try to prevent harm was to say ‘comments like this happen all the time online, we don’t think that it’s going to cause individual trauma.’ We kind of dispute that because some of the comments are [you] pretending to be a trauma counselor and maybe that could actually cause some harm. … I don’t think they thought enough about community impact until after the community screamed ‘ouch.’” -Logan MacGregor
You can’t just blame AI for this (22:52): “One thing that’s really special about Change My View is that it’s a human space; it’s a decidedly human space. … The University of Zurich is a decidedly human space. What I think is so insidious about AI is it’s caused people to behave in ways that I don’t know we would have, without the stupid thinking machines. Because it’s a toxic influence. Unlike the bots that are invading us daily, that we’re constantly shutting down. …
“That hurts a little bit more than just dealing with bots, because this wasn’t just bots. These are people interacting with other people, and there was a human element there. The researchers are real people. I’m a real person. This happened between real people, and it wasn’t just AI.” -Logan MacGregor
How did the community respond when the experiment was disclosed? (24:47): “I would say there was this collective outrage [from the community]. … It was a unique and singular violation of the ethos of the sub, and it was especially palpable because there are a lot of researchers and research-affiliated people that are fond of the sub. It seemed like: We protect national parks, and we have national monuments – these protected spaces – and it almost felt on that level. Of all the places to do this, why Change My View?” -Logan MacGregor
Researchers can help online communities in this moment, but not if they can’t be trusted (34:13): “One of the things that I worry about when it comes to AI is it’s probably going to chip away … at this idea of having protected online spaces, because if in-person conversations are the only way that you can validate that you’re not talking to a robot, then this thing that we created called the internet, it’s going to cease to have value at all.
“That’s the fear, and I have hope that we’re going to be able to figure out a way to get past that challenge, but I’m scratching my head as to how we would do that. The true tragedy in this whole piece is that the very people that I think are best equipped to help us navigate that space are now distrusted because of this experiment. We need to heal that, and I don’t know how that’s going to happen.” -Logan MacGregor
About Logan MacGregor
Logan MacGregor is a member of the volunteer moderation team on r/changemyview. Logan draws on a blend of experience that includes social work, administration, program management, project management (including research-based projects), policy, strategic development, and emergency management. A credentialed Type 3 Planning Section Chief, Logan plans to complete the master’s program at the Center for Homeland Defense and Security, with a thesis likely focusing on information campaigns.
Related Links
- Change My View subreddit, where Logan is a volunteer moderator
- Unauthorized Experiment on CMV Involving AI-generated Comments, the announcement made by the moderators revealing the existence of the experiment to the community
- Reddit slams ‘unethical experiment’ that deployed secret AI bots in forum by Vivian Ho for the Washington Post
- CMV AI Experiment Update – Apology Received from Researchers, an update posted by the moderators after researchers apologized
- Don’t Create Fake Accounts on Your Community and Don’t Lie to Your Members by Patrick, discussing how Steve Huffman taught students to create fake accounts in their online communities
- How MetaFilter’s Founder (Successfully) Stepped Away From the Community After 16 Years, the Community Signal episode with the story of Scott Adams impersonating a Scott Adams fan
- ‘Unethical’ AI research on Reddit under fire by Cathleen O’Grady for Science
Transcript
Your Thoughts
If you have any thoughts on this episode that you’d like to share, please leave me a comment or send me an email. Thank you for listening.