DeepMind’s 145-page paper on AGI safety may not convince skeptics
Google DeepMind on Wednesday published a comprehensive paper on its approach to AGI safety, roughly defined as AI that can accomplish any task a human can.
AGI is a somewhat controversial subject in the AI field, with naysayers suggesting it’s little more than a pipe dream. Others, including major AI labs such as Anthropic, warn that it’s around the corner and could result in catastrophic harm if steps aren’t taken to put appropriate safeguards in place.
DeepMind’s 145-page document, co-authored by DeepMind co-founder Shane Legg, predicts that AGI could arrive by 2030 and that it may result in what the authors call “severe harm.” The paper doesn’t concretely define this, but it gives the alarmist example of “existential risks” that “permanently destroy humanity.”
“(We anticipate) the development of an Exceptional AGI before the end of the current decade,” the authors wrote. “An Exceptional AGI is a system that has a capability matching at least the 99th percentile of skilled adults on a wide range of non-physical tasks, including metacognitive tasks such as learning new skills.”
Right off the bat, the paper contrasts DeepMind’s treatment of AGI risk mitigation with Anthropic’s and OpenAI’s. Anthropic, it says, places less emphasis on “robust training, monitoring, and security,” while OpenAI is overly bullish on “automating” a form of AI safety research known as alignment research.
The paper also casts doubt on the viability of superintelligent AI, meaning AI that can perform jobs better than any human. (OpenAI recently claimed that it’s shifting its aim from AGI to superintelligence.) Absent “significant architectural innovation,” the DeepMind authors aren’t convinced that superintelligent systems will emerge soon, if ever.
The paper does find it plausible, however, that current paradigms will enable “recursive AI improvement”: a positive feedback loop in which AI conducts its own AI research to create more sophisticated AI systems. And that, the authors say, could be incredibly dangerous.
At a high level, the paper proposes and advocates developing techniques to block bad actors’ access to hypothetical AGI, improve understanding of AI systems’ actions, and “harden” the environments in which AI can act. It acknowledges that many of these techniques are nascent and have “open research problems,” but it cautions against ignoring the safety challenges potentially on the horizon.
“The transformative nature of AGI has the potential for both incredible benefits as well as severe harms,” the authors write. “As a result, to build AGI responsibly, it is critical for frontier AI developers to proactively plan to mitigate severe harms.”
Some experts disagree with the paper’s premises, however.
Heidy Khlaaf, chief AI scientist at the nonprofit AI Now Institute, told TechCrunch that she thinks the concept of AGI is too ill-defined to be “rigorously evaluated scientifically.” Another AI researcher, Matthew Guzdial, an assistant professor at the University of Alberta, said he doesn’t believe recursive AI improvement is realistic at present.
“(Recursive improvement) is the basis for the intelligence singularity arguments,” Guzdial told TechCrunch, “but we’ve never seen any evidence for it working.”
Sandra Wachter, a researcher studying tech and regulation at Oxford, argues that a more realistic concern is AI reinforcing itself with “inaccurate outputs.”
“With the proliferation of generative AI outputs on the internet and the gradual replacement of authentic data, models are now learning from their own outputs that are riddled with mistruths, or hallucinations,” she told TechCrunch. “At this point, chatbots are predominantly used for search and truth-finding purposes. That means we are constantly at risk of being fed mistruths and believing them because they are presented in very convincing ways.”
Be that as it may, DeepMind’s paper seems unlikely to settle the debate over just how realistic AGI is, or which areas of AI safety most urgently need attention.