Google now thinks it's OK to use AI for weapons and surveillance

Google has made one of the most substantive changes to its AI principles since first publishing them in 2018. In a change spotted by The Washington Post, the search giant edited the document to remove pledges that it would not "design or deploy" AI tools for use in weapons or surveillance technology. Previously, those guidelines included a section titled "applications we will not pursue," which is not present in the current version of the document.

Instead, there is now a section titled "Responsible development and deployment." There, Google says it will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights."

That's a far broader commitment than the specific ones the company made as recently as late last month, when the previous version of its AI principles was still live on its website. For instance, as it relates to weapons, the company previously said it would not design AI for use in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people." As for AI surveillance tools, the company said it would not develop tech that violates "internationally accepted norms."

Screenshot showing the previous version of Google's AI principles. (Image: Google)

When asked for comment, a Google spokesperson pointed Engadget to a blog post the company published on Thursday. In it, DeepMind CEO Demis Hassabis and James Manyika, senior vice president of research, labs, technology and society at Google, say the emergence of AI as a "general-purpose technology" necessitated a policy change.

"We believe that democracies must lead to the development of AIs, guided by basic values ​​such as freedom, equality and respect for human rights. And we believe that companies, governments and organizations that share these values ​​must work together to create AI that protects people, encourages global growth and supports national security," The two wrote. "… Guided by our AI principles, we will continue to focus on AI research and applications that are aligned with our mission, our scientific focus and our areas of expertise and will remain in accordance with the widely accepted principles of international law and Human rights – always evaluates specific specific work by carefully assessing whether the benefits significantly exceed the potential risks."

When Google first published its AI principles in 2018, it did so in the aftermath of Project Maven, a controversial government contract that, had Google decided to renew it, would have seen the company provide AI software to the Department of Defense for analyzing drone footage. Dozens of Google employees quit the company in protest of the contract, with thousands more signing a petition in opposition. When Google eventually published its new guidelines, CEO Sundar Pichai told staff his hope was that they would stand "the test of time."

By 2021, however, Google had begun pursuing military contracts again, with what was reportedly an "aggressive" bid for the Pentagon's cloud contract. At the start of this year, The Washington Post reported that Google employees had repeatedly worked with Israel's Defense Ministry to expand the government's use of AI tools.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-now-thinks-its-ok-to-use-ai-for-weapons-and-surveillance-224824373.html
