{"id":622748,"date":"2023-03-28T09:49:11","date_gmt":"2023-03-28T14:49:11","guid":{"rendered":"https:\/\/news.sellorbuyhomefast.com\/index.php\/2023\/03\/28\/lords-ai-weapons-committee-holds-first-evidence-session\/"},"modified":"2023-03-28T09:49:11","modified_gmt":"2023-03-28T14:49:11","slug":"lords-ai-weapons-committee-holds-first-evidence-session","status":"publish","type":"post","link":"https:\/\/newsycanuse.com\/index.php\/2023\/03\/28\/lords-ai-weapons-committee-holds-first-evidence-session\/","title":{"rendered":"Lords AI weapons committee holds first evidence session"},"content":{"rendered":"<section id=\"content-body\">\n<p>The potential benefits of using artificial intelligence (AI) in weapons systems and military operations should not be conflated with better international humanitarian law (IHL) compliance, Lords have been told.<\/p>\n<p>Established 31 January 2023, the <a href=\"https:\/\/www.computerweekly.com\/news\/365532024\/Lords-Committee-to-investigate-use-of-AI-powered-weapons-systems\">House of Lords AI in Weapon Systems Committee<\/a> was set up to explore the ethics of developing and deploying autonomous weapons systems (AWS), including how they can be used safely and reliably, their potential for conflict escalation, and their compliance with international laws.<\/p>\n<p>Also known as lethal autonomous weapons systems (LAWS), these are weapons systems that can select, detect and engage targets with little or no human intervention.<\/p>\n<p>In its first evidence session on 23 March 2023, Lords heard from expert witnesses about whether the use of AI in weapon systems would improve or worsen compliance with IHL.<\/p>\n<p>Daragh Murray, a senior lecturer and IHSS Fellow at Queen Mary University of London School of Law, for example, noted there is \u201ca possibility\u201d that the use of AI here could improve compliance with IHL.<\/p>\n<p>\u201cIt can take a lot more information into account, it doesn\u2019t suffer from fatigue, adrenaline or revenge, so if it\u2019s designed properly, I don\u2019t see why it couldn\u2019t be better in some instances,\u201d he said.<\/p>\n<p>\u201cFor me, the big stumbling block is that we tend to approach an AI systems from a one-size- fits-all perspective where we expect it to do everything, but if we break it down in certain situations \u2013 maybe identifying an enemy tank or responding to an incoming rocket \u2013 an AI system might be much better.\u201d<\/p>\n<p>However, he was clear that any accountability for an AI-powered weapon systems operation would have to lie with the humans who set the parameters of deployment.<\/p>\n<p>Georgia Hinds, a legal adviser at the International Committee of the Red Cross (ICRC), said that while that she understands the potential military benefits offered by AWS \u2013 such as increased operational speed \u2013 she would strongly caution against conflating these benefits with improved IHL compliance.<\/p>\n<p>\u201cSomething like [improved operational] speed actually could pose a real risk for compliance with IHL,\u201d she said. 
“If human operators don’t have the actual ability to monitor and to intervene in processes, if they’re accelerated beyond human cognition, it means that they wouldn’t be able to prevent an unlawful or an unnecessary attack – and that’s actually an IHL requirement.”</p>
<p>She added that arguments around AWS not being subject to rage, revenge, fatigue and the like lack the empirical evidence to back them up.</p>
<p>“Instead what we’re doing is engaging in hypotheticals, where we compare a bad decision by a human operator against a hypothetically good outcome that results from a machine process,” she said.</p>
<p>“I think there are many assumptions made in this argument, not least of which is that humans necessarily make bad decisions, [and] it ultimately ignores the fact that humans are vested with the responsibility for complying with IHL.”</p>
<p>Noam Lubell, a professor at Essex Law School, agreed with Hinds and questioned where the benefits of military AI would accrue.</p>
<p>“Better for whom? The military side and the humanitarian side might not always see the same thing as being better,” he said. “Speed was mentioned, but accuracy, for example, is one where I think both sides of the equation – the military and the humanitarian – can make an argument that accuracy is a good thing.”</p>
<h3>Precision weapons debate</h3>
<p>Lubell noted a similar debate has played out over the past decade in relation to the use of “precision weapons” such as drones – the use of which was <a href="https://www.thebureauinvestigates.com/stories/2017-01-17/obamas-covert-drone-war-in-numbers-ten-times-more-strikes-than-bush">massively expanded under the Obama administration</a>.</p>
<p>“You can see that on the one hand, there’s an argument being made: ‘There’ll be less collateral damage, so it’s better to use them’.
But at the same time, one could also argue that has led to carrying out military strikes in situations where previously it would have been unlawful because there would be too much collateral damage,” he said.</p>
<p>“Now you carry out a strike because you feel you’ve got a precision weapon, and there is some collateral damage, albeit lawful, but had you not had that weapon, you wouldn’t have carried out the strike at all.”</p>
<p><a href="https://www.computerweekly.com/news/365532126/AI-interview-Elke-Schwarz-professor-of-political-theory">Speaking with Computer Weekly about the ethics of military AI</a>, Elke Schwarz, professor of political theory and author of <i><a href="https://manchesteruniversitypress.co.uk/9781526114822/">Death machines: The ethics of violent technologies</a></i>, made a similar point: more than a decade of drone warfare has shown that greater ‘precision’ does not necessarily lead to fewer civilian casualties, as the convenience enabled by the technology actually lowers the threshold for resorting to force.</p>
<p>“We have these weapons that allow us great distance, and with distance comes risk-lessness for one party, but it doesn’t necessarily translate into less risk for others – only if you use them in a way that is very pinpointed, which never happens in warfare,” she said, adding that the effects of this are clear: “Some lives have been spared and others not.”</p>
<p>On the precision argument, Hinds noted that while AWS are often equated with being more accurate, in the ICRC’s view the opposite is true.</p>
<p>“The use of an autonomous weapon, by its definition, reduces precision because the user actually isn’t choosing a specific target – they’re launching a weapon that’s designed to be triggered based on a generalised target profile, or a category of object,” she said.</p>
<p>“I think the reference to precision here generally relates to the ability to better hone in on a target and maybe to use a smaller payload, but that isn’t tied specifically to the autonomous function of the weapons.”</p>
<h3>Human accountability</h3>
<p>Responding to a Lords question about whether it would ever be appropriate to “delegate” decision-making responsibility to a military AI system, Lubell said we are not talking about a <em>Terminator</em>-style scenario in which an AI sets its own tasks and goes about achieving them, and he warned against anthropomorphising language.</p>
<p>“The systems that we’re talking about don’t decide, in that sense. We’re using human language for a tool – it executes a function but it doesn’t make a decision in that sense. I’m personally not comfortable with the idea that we’re even delegating anything to it,” he said.</p>
<p>“This is a tool just like any other tool, all weapons are tools, we’re using a tool… there are solutions to the accountability problem that are based on understanding that these are tools rather than agents.”</p>
<p>Murray said he would also be very hesitant to use the word ‘delegate’ in this context: “I think we have to remember that humans set the parameters for deployment.
So I think the tool analogy is a really important one.”</p>
<p>Hinds added that IHL assessments, particularly those weighing proportionality against the anticipated military advantage, rely heavily on value judgements and context-specific considerations.</p>
<p>“When you recognise someone is surrendering, when you have to calculate proportionality, it’s not a numbers game. It’s about what is the military advantage anticipated,” she said.</p>
<p>“Algorithms are not good at evaluating context, they’re not good at rapidly changing circumstances, and they can be quite brittle. I think in those circumstances, I would really query how we’re saying that there would be a better outcome for IHL compliance, when you’re trying to codify qualitative assessments into quantitative code that doesn’t respond well to these elements.”</p>
<p>Ultimately, she said, IHL is about “processes, not results”, and “human judgement” can never be outsourced.</p>
<h3>AI for general military operations</h3>
<p>All the witnesses agreed that looking narrowly at the role of AI in weapons systems would fail to fully account for the other ways in which AI could be deployed militarily and contribute to the use of lethal force, and said they were particularly concerned about the use of AI for intelligence and decision-making purposes.</p>
<p>“I wouldn’t limit it to weapons,” said Lubell. “Artificial intelligence can play a critical role in who or what ends up being targeted, even outside of a particular weapon.”</p>
<p>Lubell added that he is just as concerned, if not more so, about the use of AI in the early intelligence analysis stages of military operations, and how it will affect decision-making.</p>
<p>Giving the example of <a href="https://www.computerweekly.com/news/365531455/Mock-crime-prediction-tool-profiles-MEPs-as-criminals">AI in law enforcement</a>, which has been shown to further entrench existing patterns of discrimination in the criminal justice system due to the use of historically biased policing data, Lubell said he is concerned about “those problems repeating themselves when we’re using AI in the earlier intelligence analysis stages [of military planning]”.</p>
<p>The Lords present at the session took this on board and said they would expand the scope of their inquiry to look at the use of AI throughout the military, not just in weapons systems specifically.</p>