{"id":824710,"date":"2025-02-06T00:11:47","date_gmt":"2025-02-06T06:11:47","guid":{"rendered":"https:\/\/newsycanuse.com\/index.php\/2025\/02\/06\/google-lifts-a-ban-on-using-its-ai-for-weapons-and-surveillance\/"},"modified":"2025-02-06T00:11:47","modified_gmt":"2025-02-06T06:11:47","slug":"google-lifts-a-ban-on-using-its-ai-for-weapons-and-surveillance","status":"publish","type":"post","link":"https:\/\/newsycanuse.com\/index.php\/2025\/02\/06\/google-lifts-a-ban-on-using-its-ai-for-weapons-and-surveillance\/","title":{"rendered":"Google Lifts a Ban on Using Its AI for Weapons and Surveillance"},"content":{"rendered":"<div data-journey-hook=\"client-content\" data-testid=\"BodyWrapper\">\n<figure><\/figure>\n<p><span>Google announced Tuesday<\/span> that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology. The company removed language promising not to pursue \u201ctechnologies that cause or are likely to cause overall harm,\u201d \u201cweapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,\u201d \u201ctechnologies that gather or use information for surveillance violating internationally accepted norms,\u201d and \u201ctechnologies whose purpose contravenes widely accepted principles of international law and human rights.\u201d<\/p>\n<p>The changes were disclosed in <a data-offer-url=\"https:\/\/blog.google\/technology\/ai\/ai-principles\/\" data-event-click=\"{\"element\":\"ExternalLink\",\"outgoingURL\":\"https:\/\/blog.google\/technology\/ai\/ai-principles\/\"}\" href=\"https:\/\/blog.google\/technology\/ai\/ai-principles\/\" rel=\"nofollow noopener\" target=\"_blank\">a note appended<\/a> to the top of a 2018 blog post unveiling the guidelines. \u201cWe\u2019ve made updates to our AI Principles. 
Visit AI.Google for the latest,\u201d the note reads.<\/p>\n<p>In <a data-offer-url=\"https:\/\/blog.google\/technology\/ai\/responsible-ai-2024-report-ongoing-work\/\" data-event-click=\"{\"element\":\"ExternalLink\",\"outgoingURL\":\"https:\/\/blog.google\/technology\/ai\/responsible-ai-2024-report-ongoing-work\/\"}\" href=\"https:\/\/blog.google\/technology\/ai\/responsible-ai-2024-report-ongoing-work\/\" rel=\"nofollow noopener\" target=\"_blank\">a blog post on Tuesday<\/a>, a pair of Google executives cited the increasingly widespread use of AI, evolving standards, and geopolitical battles over AI as the \u201cbackdrop\u201d to why Google\u2019s principles needed to be overhauled.<\/p>\n<p>Google first published the principles in 2018 as it moved to quell internal protests over the company\u2019s decision to work on a US military <a href=\"https:\/\/www.wired.com\/story\/3-years-maven-uproar-google-warms-pentagon\/\">drone program<\/a>. In response, it declined to <a href=\"https:\/\/www.wired.com\/story\/google-wont-renew-controversial-pentagon-ai-project\/\">renew the government contract<\/a> and also announced <a href=\"https:\/\/www.wired.com\/beyond-the-beyond\/2018\/06\/googles-ai-principles\/\">a set of principles<\/a> to guide future uses of its advanced technologies, such as artificial intelligence. Among other measures, the principles stated Google would not develop weapons, certain surveillance systems, or technologies that undermine human rights.<\/p>\n<p>But in an announcement on Tuesday, Google did away with those commitments. <a data-offer-url=\"https:\/\/ai.google\/responsibility\/principles\/\" data-event-click=\"{\"element\":\"ExternalLink\",\"outgoingURL\":\"https:\/\/ai.google\/responsibility\/principles\/\"}\" href=\"https:\/\/ai.google\/responsibility\/principles\/\" rel=\"nofollow noopener\" target=\"_blank\">The new webpage<\/a> no longer lists a set of banned uses for Google\u2019s AI initiatives. 
Instead, the revised document offers Google more room to pursue potentially sensitive use cases. It states Google will implement \u201cappropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.\u201d Google also now says it will work to \u201cmitigate unintended or harmful outcomes.\u201d<\/p>\n<p>\u201cWe believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,\u201d wrote James Manyika, Google senior vice president for research, technology, and society, and Demis Hassabis, CEO of Google DeepMind, the company\u2019s esteemed AI research lab. \u201cAnd we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.\u201d<\/p>\n<p>They added that Google will continue to focus on AI projects \u201cthat align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights.\u201d<\/p>\n<p>Multiple Google employees expressed concern about the changes in conversations with WIRED. \u201cIt\u2019s deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public, despite long-standing employee sentiment that the company should not be in the business of war,\u201d says Parul Koul, a Google software engineer and president of the Alphabet Workers Union-CWA.<\/p>\n<hr>\n<h2><strong>Got a Tip?<\/strong><\/h2>\n<p><strong>Are you a current or former employee at Google? We\u2019d like to hear from you. 
Using a nonwork phone or computer, contact Paresh Dave on Signal\/WhatsApp\/Telegram at +1-415-565-1302 or pa*********@***ed.com, or Caroline Haskins on Signal at +1 785-813-1084 or at em******************@***il.com<\/strong><\/p>\n<hr>\n<p>US President Donald Trump\u2019s return to office last month has galvanized many companies <a href=\"https:\/\/www.wired.com\/story\/meta-2024-earnings-dei-trump\/\">to revise policies promoting equity and other liberal ideals<\/a>. 
Google spokesperson Alex Krasov says the changes have been in the works much longer.<\/p>\n<p>Google lists its new goals as pursuing bold, responsible, and collaborative AI initiatives. Gone are phrases such as \u201cbe socially beneficial\u201d and maintain \u201cscientific excellence.\u201d Added is a mention of \u201crespecting <a href=\"https:\/\/www.wired.com\/story\/ai-copyright-case-tracker\/\">intellectual property rights<\/a>.\u201d<\/p>\n<\/div>\n<div data-journey-hook=\"client-content\" data-testid=\"BodyWrapper\">\n<p>After the initial release of its AI principles roughly seven years ago, Google created two teams tasked with reviewing whether projects across the company were living up to the commitments. One focused on Google\u2019s core operations, such as search, ads, Assistant, and Maps. Another focused on Google Cloud offerings and deals with customers. The unit focused on Google\u2019s consumer business <a href=\"https:\/\/www.wired.com\/story\/google-splits-up-responsible-innovation-ai-team\/\">was split up early last year<\/a> as the company raced to develop chatbots and other generative AI tools to compete with OpenAI.<\/p>\n<p>Timnit Gebru, a former colead of Google\u2019s ethical AI research team who was <a href=\"https:\/\/www.wired.com\/story\/google-timnit-gebru-ai-what-really-happened\/\">later fired from that position<\/a>, claims the company\u2019s commitment to the principles had always been in question. 
\u201cI would say that it\u2019s better to not pretend that you have any of these principles than write them out and do the opposite,\u201d she says.<\/p>\n<p>Three former Google employees who had been involved in reviewing projects to ensure they aligned with the company\u2019s principles say the work was challenging at times because of the varying interpretations of the principles and pressure from higher-ups to prioritize business imperatives.<\/p>\n<p>Google still has language about preventing harm in its official Cloud Platform Acceptable <a href=\"https:\/\/cloud.google.com\/terms\/aup\">Use Policy<\/a>, which includes various AI-driven products. The policy forbids violating \u201cthe legal rights of others\u201d and engaging in or promoting illegal activity, such as \u201cterrorism or violence that can cause death, serious harm, or injury to individuals or groups of individuals.\u201d<\/p>\n<p>However, when pressed about how this policy squares with Project Nimbus\u2014a cloud computing contract with the Israeli government, which has benefited the country\u2019s <a href=\"https:\/\/www.wired.com\/story\/amazon-google-project-nimbus-israel-idf\/\">military<\/a> \u2014 Google <a href=\"https:\/\/www.wired.com\/story\/amazon-google-project-nimbus-israel-idf\/\">has said<\/a> that the agreement \u201cis not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.\u201d<\/p>\n<p>\u201cThe Nimbus contract is for workloads running on our commercial cloud by Israeli government ministries, who agree to comply with our <a href=\"https:\/\/cloud.google.com\/terms\">Terms of Service<\/a> and <a href=\"https:\/\/cloud.google.com\/terms\/aup\">Acceptable Use Policy<\/a>,\u201d Google spokesperson Anna Kowalczyk <a href=\"https:\/\/www.wired.com\/story\/amazon-google-project-nimbus-israel-idf\/\">told WIRED<\/a> in July.<\/p>\n<p>Google Cloud\u2019s <a href=\"https:\/\/cloud.google.com\/terms\">Terms of Service<\/a> 
similarly forbid any applications that violate the law or \u201clead to death or serious physical harm to an individual.\u201d Rules for some of Google\u2019s consumer-focused AI services also ban illegal uses and some potentially harmful or offensive uses.<\/p>\n<p><em>Update 2\/04\/25 5:45 ET: This story has been updated to include an additional comment from a Google employee.<\/em><\/p>\n<\/div>\n<p><a href=\"https:\/\/www.wired.com\/story\/google-responsible-ai-principles\/\" class=\"button purchase\" rel=\"nofollow noopener\" target=\"_blank\">Read More<\/a><br \/>\n Paresh Dave, Caroline Haskins<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Google announced Tuesday that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology. The company removed language promising not to pursue \u201ctechnologies that cause or are likely to cause overall harm,\u201d \u201cweapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to 
people,\u201d<\/p>\n","protected":false},"author":1,"featured_media":824711,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[496,2197,46],"tags":[],"class_list":{"0":"post-824710","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-google","8":"category-lifts","9":"category-technology"},"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts\/824710","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/comments?post=824710"}],"version-history":[{"count":0,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts\/824710\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/media\/824711"}],"wp:attachment":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/media?parent=824710"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/categories?post=824710"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/tags?post=824710"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}