

Google LLC has made a major change to its AI Principles, removing the section in which it pledged not to use the technology for surveillance or weapons applications.
The old policy, which the company first released in 2018, stated that it would not pursue any AI developments that were “likely to cause harm,” and it would not “design or deploy” AI tools that could be used for weapons or surveillance technologies. That section is now gone.
In its place, Google says the emphasis is now on “responsible development and deployment.” This, says Google, will only be implemented with “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.”
Google hasn’t denied that its philosophy around AI has shifted. In a blog post published shortly before the end-of-year financial report, Google Senior Vice President James Manyika and Demis Hassabis, head of AI lab Google DeepMind, said governments now need to work together to support “national security.”
The technology has “evolved” since 2018, the post said, which apparently meant the principles needed some fine-tuning. “Billions of people are using AI in their everyday lives,” it said. “AI has become a general-purpose technology, and a platform which countless organizations and individuals use to build applications. It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself.”
The post added that global competition regarding AI has heated up in what the pair said was an “increasingly complex geopolitical landscape.” They said they believe “democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.”
The change is reminiscent of the one that happened to Google’s motto. Founders Sergey Brin and Larry Page introduced the motto “Don’t be evil,” which was updated to “Do the right thing” in 2015.
The company has since trodden carefully where its ethics and technologies are concerned, dropping a U.S. Department of Defense contract involving AI surveillance technology in 2018 after an outcry from its staff and the public. That was when Google introduced new guidelines for its use of AI in defense and intelligence contracts.