Researchers secretly experimented on Reddit users with AI-generated comments



A group of researchers covertly ran a months-long "unauthorized" experiment in one of Reddit's most popular communities, using AI-generated comments to test the persuasiveness of large language models. The experiment, which was revealed over the weekend by the moderators of r/changemyview, is described by Reddit mods as "psychological manipulation" of unsuspecting users.

“The CMV Mod Team must inform the CMV community of an unauthorized experiment conducted by researchers from the University of Zurich on CMV users,” the subreddit's moderators wrote in a lengthy post notifying Redditors. “This experiment deployed AI-generated comments to study how AI could be used to change views.”

The researchers used LLMs to generate comments in reply to posts on r/changemyview, a subreddit where Reddit users post (often controversial or provocative) opinions and invite debate from other users. The community has 3.8 million members and often lands on the front page of Reddit. According to the subreddit's moderators, the AI adopted numerous different identities in its comments over the course of the experiment, including a sexual assault survivor, a trauma counselor “specializing in abuse,” and a “Black man opposed to Black Lives Matter.” Many of the original comments have since been deleted, but some can still be viewed in an archive.

In a draft of their paper, the unnamed researchers describe how they not only used AI to generate responses, but also attempted to personalize those responses based on information gleaned from the original poster's prior Reddit history. “In addition to the post’s content, LLMs were provided with personal attributes of the OP (gender, age, ethnicity, location, and political orientation), as inferred from their posting history using another LLM,” they write.

The r/changemyview moderators note that the researchers violated multiple subreddit rules, including a policy requiring disclosure when AI is used to generate a comment and a rule prohibiting bots. They say they have filed an official complaint with the University of Zurich and have asked the researchers not to publish their paper.

The researchers did not respond to an email from Engadget. However, in posts on Reddit and in a draft of their paper, they say their research was approved by a university ethics committee and that their work could help online communities like Reddit protect users from more malicious uses of AI.

“We acknowledge the moderators’ position that this study was an unwelcome intrusion in your community, and we understand that some of you may feel uncomfortable that this experiment was conducted without prior consent,” the researchers wrote in response to the r/changemyview mods. “We believe the potential benefits of this research substantially outweigh its risks. Our controlled, low-risk study provided valuable insight into the real-world persuasive capabilities of LLMs — capabilities that are already easily accessible to anyone and that malicious actors could already exploit at scale for far more dangerous reasons (e.g. …)”

The r/changemyview mods dispute that the study was necessary or novel, noting that OpenAI researchers have conducted experiments using data from r/changemyview “without experimenting on non-consenting human subjects.” Reddit did not respond to a request for comment, though the accounts that posted the AI-generated comments have been suspended.

“People do not come here to discuss their views with AI or to be experimented upon,” the moderators wrote. “People who visit our sub deserve a space free from this type of intrusion.”

This article originally appeared on Engadget at https://www.engadget.com/Ai/researchers-ceretly-experiment-on-reddit-users-with-ai-gerated-comments-194328026.html?sr

 