Google is moving forward with Pentagon AI deal despite employee opposition


Google has reportedly signed an agreement allowing the US Department of Defense to use its artificial intelligence models for classified work, despite an open letter from hundreds of employees urging the company to stay away from military uses that they say could become dangerous or impossible to supervise.

The deal, reported earlier Tuesday by The Information, allows the Pentagon to use Google's AI tools for "any legitimate government purpose," including sensitive military applications. Google joins OpenAI and xAI, which have similar classified AI agreements with the Pentagon.

The reported agreement includes language stating that Google's AI systems are not intended for domestic mass surveillance or autonomous weapons without proper human oversight. But it also says that Google has no right to control or veto lawful government operational decisions, according to the reports. Google will also help adjust safety settings and filters at the government's request.

A Google spokesperson told CNET in an emailed statement that the company remains committed to the position that AI should not be used for domestic mass surveillance or autonomous weapons without human oversight, and said that providing API access to commercial models under standard practices is a “responsible approach” to supporting national security.

The Pentagon declined to comment to CNET.

The deal comes amid internal backlash. In an open letter to CEO Sundar Pichai, more than 600 Google employees asked the company to "refuse to make our AI systems available for classified workloads." The employees wrote that because they work in close proximity to the technology, they have a responsibility to highlight and prevent its "most dangerous and unethical uses."

"We want to see AI benefiting humanity, not used in inhumane or deeply harmful ways," the letter said. Employees said their concerns include autonomous lethal weapons and mass surveillance, but extend beyond those examples because classified deployments can occur without employees knowing about them or being able to stop them.

This tension echoes one of the most prominent internal revolts Google has seen. In 2018, thousands of workers protested against Project Maven, a Pentagon program that used artificial intelligence to analyze drone footage. Google later chose not to renew that contract.

The company’s stance on military and national security AI has changed since then.

Last year, Google removed earlier language from its AI principles that said it would not pursue technologies likely to cause public harm, certain weapons or surveillance technologies, or systems that violate widely accepted principles of human rights and international law.

In a February blog post announcing the AI principles update, Google DeepMind CEO Demis Hassabis and Senior Vice President James Manyika wrote that "democracies should lead in AI development" and that companies and governments must work together to build AI that "protects people, promotes global growth, and supports national security."

For Google insiders opposed to the deal, the concern is not only that the military could use the company's AI, but that classified deployment removes the usual visibility into how the models are being used.

"I feel extremely ashamed," Andreas Kirsch, a researcher at Google DeepMind, wrote in a public post on X in response to the reported deal.

The open letter from Google employees ends with a direct appeal to Google's CEO: "Today, we call on you, Sundar, to act on the values this company was built on, and to reject classified workloads."
