OpenAI updated its safety framework—but no longer sees mass manipulation and disinformation as a critical risk
OpenAI says it will stop assessing its AI models before release for the risk that they could persuade or manipulate people, potentially helping to swing elections or create highly effective propaganda campaigns.
The company says it will now address those risks through its terms of service, which restrict the use of its AI models in political campaigns and lobbying, and by monitoring how people use its models once they are released.
The company also indicated it would consider releasing AI models it judged to be “high risk” as long as it has taken appropriate steps to reduce those dangers, reserving an outright refusal for models that reach what it calls “critical risk.” Previously, OpenAI said it would not release any AI model that presented more than a “medium risk.”
The policy changes were laid out in an update to OpenAI’s “Preparedness Framework” published yesterday. That framework details how the company monitors the AI models it is building for potentially catastrophic dangers, ranging from the possibility that models could assist malicious actors to the possibility that they could escape human control.
The policy changes divided AI safety and security experts. Some took to social media to praise OpenAI for voluntarily releasing the updated framework, noting improvements such as simpler risk categories and a greater emphasis on threats from autonomous AI systems.
But others voiced concerns, including Steven Adler, a former OpenAI safety researcher, who criticized the fact that the updated framework no longer requires safety testing of fine-tuned models. “OpenAI is quietly reducing its safety commitments,” he wrote on X. Still, he stressed that he appreciated OpenAI’s efforts: “I’m overall glad to see the Preparedness Framework updated,” he said.
Other critics focused on what the updated Preparedness Framework no longer addresses.
“OpenAI appears to be shifting its approach,” said one senior researcher focused on AI policy at the RAND Corporation. “Instead of treating persuasion as a core risk category, it may now be addressed either as a higher-level societal and regulatory issue or integrated into OpenAI’s existing guidelines on model development and use.” It remains to be seen how this will play out in areas such as politics, he added, where AI’s persuasive capabilities “are still a contested issue.”
Courtney Radsch, a senior fellow at Brookings, the Centre for International Governance Innovation, and the Center for Democracy and Technology, told Fortune in a message that the move was “another example of the tech industry’s hubris.” She stressed that the decision to downgrade “persuasion” ignores context: persuasion, for example, can pose serious risks to individuals with low AI literacy and to societies at large.
Oren Etzioni, the former CEO of the Allen Institute for AI, who now works on tools to combat AI-manipulated content, was also concerned. “Dropping deception strikes me as a mistake, given the increasing persuasive power of LLMs,” he said.
However, one AI safety researcher affiliated with OpenAI told Fortune that it seemed reasonable to address disinformation and other malicious uses of persuasion through OpenAI’s terms of service. The researcher, who asked to remain anonymous because he did not have permission from his current employer to speak publicly, said that persuasion and manipulation risks are difficult to assess in pre-deployment testing. He also noted that this category of risk is more amorphous and ambiguous compared with other critical risks, such as the risk that AI could help someone carry out a cyberattack.
Notably, some members of the European Parliament have also voiced concern that the latest draft of the code of practice for the EU AI Act downgraded mandatory testing of AI models for the possibility that they could spread disinformation and undermine democracy to a voluntary consideration.
Studies have found that AI chatbots can be highly persuasive, although this capability is not in itself necessarily dangerous. Researchers at Cornell University and MIT, for example, found that dialogues with chatbots were effective at getting people to question their belief in conspiracy theories.
Other criticism of OpenAI’s updated framework focused on a line in which the company says it may adjust its safety requirements if a rival lab releases a high-risk model without comparable safeguards.
Max Tegmark, president of the Future of Life Institute, a nonprofit focused on existential risks, including threats from AI systems, told Fortune in a statement that “the race to the bottom is accelerating” and that “these companies are openly racing to build artificial general intelligence”: smarter-than-human AI systems that, in his view, threaten our continued existence.
“They’re basically announcing that nothing they say about AI safety is carved in stone,” said Gary Marcus, a longtime OpenAI critic, in a message, adding that the line foreshadows a race to the bottom. “What really governs their decisions is competitive pressure,” he said, arguing that, bit by bit, the company has walked back everything it committed to when it was operating as a nonprofit.
In general, it is helpful that companies like OpenAI openly share their thinking about their risk-management practices, Miranda Bogen, director of the AI Governance Lab at the Center for Democracy and Technology, told Fortune in an email.
That said, she added that she was concerned about the goalposts being moved. “It would be a worrying trend if, just as AI systems seem to be inching toward particular risks, those risks were downplayed in the guidelines companies set for themselves,” she said.
She also criticized the framework’s focus on “frontier” models, noting that OpenAI and other companies have used technical definitions of that term to exempt their latest, most powerful models from safety assessments. (For example, OpenAI released its GPT-4.1 model yesterday without a safety report, saying it is not a frontier model.) In other cases, companies have either failed to publish safety reports or been slow to do so, publishing them months after a model’s release.
“Between these sorts of issues and an emerging pattern in which new models are launched before, or entirely without, the documentation that companies had promised to release, it’s clear that voluntary commitments only go so far,” she said.
Update, April 16: This story has been updated to include comments from Max Tegmark of the Future of Life Institute.
This story was originally featured on Fortune.com