- Google has released what is effectively a rulebook for how the company will apply AI technology.
- The seven principles include ensuring that AI applications are socially beneficial, safe, and won’t create unfair bias.
- This is Google’s response to a conflict inside the company over management’s controversial decision to build AI tools for the military. Last week, Gizmodo reported that Google Cloud chief Diane Greene told employees that Google had decided to stop providing AI tech to the Pentagon.
Google CEO Sundar Pichai published a set of “ethical principles” on Thursday that will govern the company’s work with artificial intelligence.
Pichai said that while AI technology provides consumers and businesses with many benefits, Google realises the tech “will have a significant impact on society for many years to come” and that the company’s leaders “feel a deep responsibility to get this right.”
Pichai said AI applications will be screened to make sure they are socially beneficial, avoid creating unfair bias, are built and tested for safety, are accountable to people, incorporate privacy design principles, uphold high standards of scientific excellence, and are made available only for uses consistent with these principles.
The principles follow a conflict inside Google, pitting thousands of employees against management. These workers protested the company’s involvement in Project Maven – the controversial collaboration between Google and the US Department of Defence. In March, news leaked that Google had quietly supplied AI technology to the Pentagon to help analyse drone video footage.
In April, more than 4,000 workers signed a petition demanding that Google’s management cease work on Project Maven and promise to never again “build warfare technology.” Soon after that, Gizmodo reported that a dozen or so Google employees had resigned in protest. Last Friday, Diane Greene, Google Cloud chief, reportedly told employees that Google had decided to stop working with the military on AI.
In his blog post, Pichai also made clear what sorts of applications Google will not develop. Those include weapons or technologies that cause overall harm, as well as anything that can be used for surveillance that violates “internationally accepted norms” or anything that conflicts with “widely accepted principles of international law and human rights.”
Pichai’s pledge regarding weapons was “really strong,” Peter Asaro, associate professor of media studies at the New School in New York, told Business Insider. Asaro co-authored a letter to Google’s management last month, signed by hundreds of academics and researchers, demanding that Google cease developing military technologies and calling for a ban on autonomous weapons. However, Asaro said Google’s commitment not to build applications that “gather or use information for surveillance” in violation of international norms leaves too much wiggle room.
“Who and what dictates the norm?” Asaro asked, noting that such standards differ broadly around the world.
AI spooks many people. At the extreme, the fear is that one day we might see the kind of killer robots often depicted in Hollywood films. While that might seem far-fetched now, critics have said that Google’s video analysis of surveillance footage could help improve the accuracy of drone missile strikes.
And Gizmodo also reported that Google sought to help build systems that enabled the Pentagon to surveil entire cities.
What this means for Google’s bottom line is the potential loss of defence contracts. One, the Joint Enterprise Defence Infrastructure, or JEDI, is worth $US10 billion. But according to Bloomberg, Google was a long shot to win those contracts. Companies such as Oracle, Microsoft and IBM, which possessed far more experience working with the government, were considered the frontrunners for winning the deals.