Google Reverses Stance on AI Use: What It Means for Weapons and Surveillance

Google updates its ethics policy to align AI development with international law and human rights.

Google has quietly rewritten its commitments on artificial intelligence. In its updated ethics policy, the tech giant has dropped its previous pledge not to use AI for weapons or surveillance.

Originally, Google promised not to develop AI technologies that could cause harm, including those that could be used for military purposes or invade privacy. This was part of its “AI Principles,” established to guide its use of technology responsibly.

In a new announcement, Google stated that it aims to develop AI in a way that aligns with “international law and human rights.” However, the company no longer commits to avoiding AI for weapons or surveillance.

In a blog post, Demis Hassabis, CEO of Google DeepMind, and James Manyika, a senior vice president at Google, emphasized the importance of values like freedom and equality in AI development. They encouraged collaboration among governments and organizations to make AI beneficial for people and to promote global progress.

This shift comes after Google faced backlash in 2018 over its involvement in the Pentagon’s Project Maven. Employees protested the company’s role in using AI to help the military analyze drone footage, leading to resignations and a petition against the project. Google consequently declined to renew the contract with the Department of Defense and later withdrew from bidding on the Pentagon’s lucrative JEDI cloud computing contract.

The updated ethics policy arrives amid a changing regulatory landscape: recent political shifts have reshaped oversight of AI technologies, raising concerns about safety and accountability in AI development.


