ChatGPT hit with privacy complaint over defamatory hallucinations



OpenAI is facing another privacy complaint in Europe over its viral AI chatbot's tendency to hallucinate false information – and this one may prove hard for regulators to ignore.

Privacy rights advocacy group Noyb is supporting an individual in Norway who was horrified to find ChatGPT returning made-up information claiming he had been convicted of murdering two of his children and attempting to kill the third.

Earlier privacy complaints about ChatGPT generating incorrect personal data have involved issues such as a wrong date of birth or inaccurate biographical details. One concern is that OpenAI does not offer a way for individuals to correct false information the AI generates about them. Typically, OpenAI has offered to block responses to such prompts. But under the European Union's General Data Protection Regulation (GDPR), Europeans have a suite of data access rights that include a right to rectification of personal data.

Another component of this data protection law requires data controllers to make sure the personal data they produce about individuals is accurate – and that is the concern Noyb is flagging with its latest ChatGPT complaint.

“The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, a data protection lawyer at Noyb. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”

Confirmed breaches of the GDPR can lead to penalties of up to 4% of global annual turnover.

Enforcement could also force changes to AI products. Notably, an early GDPR intervention by Italy’s data protection watchdog – which saw ChatGPT access temporarily blocked in the country in spring 2023 – led OpenAI to make changes to the information it discloses to users, for example. The watchdog subsequently went on to fine OpenAI €15 million for processing people’s data without a proper legal basis.

Since then, though, it’s fair to say that privacy watchdogs around Europe have adopted a more cautious approach to generative AI as they try to figure out how best to apply the GDPR to these buzzy AI tools.

Two years ago, Ireland’s Data Protection Commission (DPC) – which plays a lead GDPR enforcement role on a previous Noyb complaint about ChatGPT – urged against rushing to ban generative AI tools, for example, suggesting that regulators should instead take time to work out how the law applies.

And it’s notable that a privacy complaint against ChatGPT that has been under investigation by Poland’s data protection watchdog since September 2023 still hasn’t produced a decision.

Noyb’s new ChatGPT complaint looks intended to shake privacy regulators awake when it comes to the dangers of hallucinating AIs.

The nonprofit shared a screenshot (below) with TechCrunch showing an interaction with ChatGPT in which the AI responds to the question “Who is Arve Hjalmar Holmen?” – the name of the individual bringing the complaint – by producing a tragic fiction that falsely states he was convicted of child murder and sentenced to 21 years in prison for slaying two of his own sons.

While the defamatory claim that Hjalmar Holmen is a child murderer is entirely false, Noyb notes that ChatGPT’s response does include some truths: the individual in question does have three children, the chatbot got the gender of his children right, and his home town is correctly named. That only makes it all the more bizarre and unsettling that the AI hallucinated such gruesome falsehoods on top.

A Noyb spokesperson said they were unable to determine why the chatbot produced such a specific yet false history for this individual. “We did research to make sure that this wasn’t just a mix-up with another person,” the spokesperson said, noting they had looked through newspaper archives but had not been able to find an explanation for why the AI fabricated the child slaying.

Large language models, such as the one underlying ChatGPT, essentially do next-word prediction on a vast scale, so one could speculate that the data sets used to train the tool contained many stories of filicide that influenced the word choices in response to a query about a named man.

Whatever the explanation, it’s clear that such outputs are entirely unacceptable.

Noyb’s contention is that they are also unlawful under EU data protection rules. And while OpenAI does display a tiny disclaimer at the bottom of the screen saying “ChatGPT can make mistakes. Check important info,” the group says this cannot absolve the AI developer of its duty under the GDPR not to produce egregious falsehoods about people in the first place.

OpenAI has been contacted for a response to the complaint.

While this GDPR complaint pertains to one named individual, Noyb points to other instances of ChatGPT fabricating legally compromising information – such as an Australian mayor who said he was implicated in a bribery and corruption scandal, or a German journalist who was falsely named as a child abuser – saying it’s clear this isn’t an isolated problem for the AI tool.

One important thing to note is that, following an update to the underlying AI model powering ChatGPT, Noyb says the chatbot stopped producing the dangerous falsehoods about Hjalmar Holmen – a change it links to the tool now searching the internet for information about people when asked who they are (whereas previously, a blank in its data set could, presumably, have encouraged it to hallucinate such a wildly wrong response).

In our own tests asking ChatGPT “Who is Arve Hjalmar Holmen?”, it initially responded with a slightly odd combo, displaying some photos of different people apparently sourced from sites including Instagram, SoundCloud, and Discogs, alongside text claiming it “couldn’t find any information” on an individual of that name (see our screenshot below). A second attempt turned up a response identifying Arve Hjalmar Holmen as a “Norwegian musician and songwriter” whose albums include “Honky Tonk Inferno.”

ChatGPT screenshot: Natasha Lomas/TechCrunch

While the dangerous falsehoods ChatGPT generated about Hjalmar Holmen appear to have stopped, both Noyb and Hjalmar Holmen remain concerned that incorrect and defamatory information about him could be retained within the AI model.

“Adding a disclaimer that you do not comply with the law does not make the law go away,” noted Kleanthi Sardeli, another data protection lawyer at Noyb, in a statement. “AI companies can also not just ‘hide’ false information from users while they internally still process false information.”

“AI companies should stop acting as if the GDPR does not apply to them, when it clearly does,” she added. “If hallucinations are not stopped, people can easily suffer reputational damage.”

Noyb has filed the complaint against OpenAI with the Norwegian data protection authority – and it is hoping the watchdog will decide it is competent to investigate, since the complaint targets OpenAI’s U.S. entity, arguing that the company’s Ireland office is not solely responsible for product decisions affecting Europeans.

However, an earlier Noyb-backed GDPR complaint against OpenAI, which was filed in Austria in April 2024, was referred by the regulator to Ireland’s DPC on account of a change made by OpenAI earlier that year to name its Irish division as the provider of the ChatGPT service to regional users.

Where is that complaint now? Still sitting on a desk in Ireland.

“Having received the complaint from the Austrian supervisory authority in September 2024, the DPC commenced the formal handling of the complaint and it is still ongoing,” a DPC spokesperson told TechCrunch when asked for an update.

The spokesperson did not offer any steer on when the DPC’s investigation into ChatGPT’s hallucinations is expected to conclude.
