AI image generator exposed database reveals what people really use it for

In addition to CSAM, Fowler says, the database contained AI-generated pornographic images of adults plus potential "face-swap" images. Among the files, he observed what appeared to be photographs of real people, which were likely used to create "explicit nude or sexual AI-generated images," he says. "So they were taking real pictures of people and swapping their faces on there," he says of some of the generated images.

While it was live, the Gennomis website allowed the creation of explicit adult AI imagery. Many of the images featured on its homepage and in an AI "models" section included sexualized images of women: some were "photorealistic," while others were fully AI-generated or in animated styles. It also included a "NSFW" gallery and a "marketplace" where users could share images and potentially sell albums of AI-generated photos. The website's tagline said people could "generate unrestricted" images and videos; a previous version of the site from 2024 said "uncensored images" could be created.

Gennomis's user policies stated that only "respectful content" is allowed, saying "explicit violence" and hate speech are prohibited. "Child pornography and any other illegal activities are strictly prohibited on Gennomis," its community guidelines read, adding that accounts posting prohibited content would be terminated. (Researchers, victim advocates, journalists, tech companies, and others have largely phased out the phrase "child pornography" in favor of CSAM over the last decade.)

It is unclear to what extent Gennomis used any tools or moderation systems to prevent or prohibit the creation of AI-generated CSAM. Some users posted on its "community" page last year that they could not generate images of people having sex, and that their prompts were blocked for nonsexual "dark humor." Another account posted on the community page that the "NSFW" content should be addressed, as it "might be looked at by the feds."

"If I was able to see those images with nothing more than the URL, that shows me that they're not taking all the necessary steps to block that content," Fowler says of the database.

Henry Ajder, a deepfake expert and founder of the consultancy Latent Space Advisory, says that even if the creation of harmful and illegal content was not permitted by the company, the website's branding, with its references to "unrestricted" image creation and an "NSFW" section, indicated there could be an association with intimate content without safety measures.

Ajder says he was surprised that the English-language website was linked to a South Korean entity. Last year the country was gripped by a nonconsensual deepfake "emergency" that targeted girls, before it took measures to combat the wave of deepfake abuse. Ajder says more pressure needs to be put on all parts of the ecosystem that allows nonconsensual images to be generated using AI. "The more of this that we see, the more it forces the question onto legislators, onto tech platforms, onto web hosting companies, onto payment providers. All of the people who, in some form or another, knowingly or otherwise, mostly unknowingly, are facilitating and enabling this to happen," he says.

Fowler says the database also exposed files that appeared to include AI prompts. No user data, such as logins or usernames, were included in the exposed data, the researcher says. Screenshots of prompts show the use of words such as "tiny" and "girl," and include references to sexual acts between family members. The prompts also contained sexual acts between celebrities.

"It seems to me that the technology has raced ahead of any of the guidelines or controls," Fowler says. "From a legal standpoint, we all know that child explicit images are illegal, but that didn't stop the technology from being able to generate those images."

As generative AI systems have vastly enhanced how easy it is to create and modify images over the past two years, there has been an explosion of AI-generated CSAM. "Webpages containing AI-generated child sexual abuse material have more than quadrupled since 2023, and the photorealism of this horrific content has also leapt in sophistication," says Derek Ray-Hill, the CEO of the Internet Watch Foundation (IWF).

The IWF has documented how criminals are increasingly creating AI-generated CSAM and evolving the methods they use to create it. "It's currently just too easy for criminals to use AI to generate and distribute sexually explicit content of children at scale and at speed," Ray-Hill says.
