The future of warfare lies not in drones that are remotely controlled by a pilot, but in unmanned weapon systems that can independently acquire, track and engage targets.
In fact, this future has, in one respect or another, been a reality since at least the 1980s. Weapon systems such as the Phalanx Close-In Weapon System, the Aegis Weapon System and the Iron Dome Weapon System detect incoming threats and react to them without requiring a human to pull the trigger.
But there is a difference between these types of mechanised responses and autonomous weapon systems that are able to select and analyse a target, and decide whether or not to attack it.
The latter are the subject of this article, which proceeds in three parts to explain what autonomous weapons are, what issues they raise at international law, and what they may mean for the future of war.
What are autonomous weapon systems?
In 2013, as part of a test mission, an Air Force B-1 bomber deployed a Long Range Anti-Ship Missile (LRASM) over Point Mugu, off the coast of California. Although pilots initially directed the LRASM, the weapon entered its autonomous mode halfway through its flight. Without any further human intervention, it analysed three possible ships before selecting one to attack.
Weapon systems with some level of autonomy are already in use, and Australia may consider them for deployment by the mid-2020s (Defence White Paper at 2.81). Autonomy is a matter of degree, but the LRASM evidently displays a high degree of it. It differs from the defensive systems described above, which react on the basis of pre-programmed rules to intercept incoming threats. We know precisely what the Iron Dome will do to an incoming missile. Autonomous weapon systems, on the other hand, behave in ways that are not entirely predictable.
Beyond that, much of what we know about autonomous weapon systems is hypothetical. Their use to deliver lethal force is, with some exceptions, banned by the US Department of Defense until 2022 (US Department of Defense, Directive Number 3000.09: Autonomy in Weapon Systems at [4.c.(3)]; exceptions at [4.d.]). But we do know that they will not be the silver-screen, silver-boned killer robots from the future. A definition offered by the US (Directive Number 3000.09, Part II: Definitions) explains that these are systems that ‘once activated can select and engage targets without further intervention by a human operator [emphasis added].’ There is necessarily some human involvement.
That involvement does not, however, extend to the legally significant act of selecting and engaging a target. Where that act is not subject to meaningful human control, including where an override function exists but the response happens so quickly that no human operator could keep up, the weapon may be considered autonomous.
How to regulate autonomous weapon systems?
According to a report to the UN Human Rights Council by the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, such weapons should meet international standards before they are even considered for deployment (Christof Heyns, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions). As yet, there are no treaties dealing specifically with autonomous weapon systems, but per Article 2(b) of Additional Protocol I to the Geneva Conventions (API), generally recognised principles and international humanitarian law (IHL) continue to apply.
Article 36 of the API requires states to determine whether new weapons are prohibited under international law. That determination requires consideration of two further API articles: Article 35(2), which prohibits weapons causing unnecessary suffering or superfluous injury, and Article 51(4)(b), which prohibits inherently indiscriminate weapons (for a more in-depth look at how these provisions affect autonomous weapons, see Kenneth Anderson & Matthew C. Waxman, ‘Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can’, Stanford University, The Hoover Institution (Jean Perkins Task Force on National Security and Law Essay Series), 2013). Autonomous weapon systems tend to offer a new method of delivering existing weaponry, including bombs and bullets, so they are unlikely to be the subject of a blanket ban in this regard. However, their use may still contravene IHL if they are incapable of respecting the principles of proportionality and distinction (International Committee of the Red Cross, Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects 75).
Proportionality demands the balancing of military advantage against civilian harm. The assessment of a target’s worth is typically carried out on the scene by a commander making a judgment call. It does not adhere to a system of precedent or a rigid ratio, so programming a weapon to make such an assessment may be difficult, particularly as the assessment may change from minute to minute as new intelligence arrives.
Distinction forbids the targeting of persons who are not taking a direct part in hostilities, and although autonomous weapon systems can be fitted with advanced sensors to process biometric data, they may not be able to account for the difficult and fluid line between civilians and combatants (Peter Asaro, ‘On Banning Autonomous Weapon Systems’; the International Committee of the Red Cross has released an entire guide to interpreting what direct participation in hostilities means, see Nils Melzer, Interpretive Guidance on the Notion of Direct Participation in Hostilities Under International Humanitarian Law). Civilians can become lawful targets if they take up arms, which sensory equipment may be able to detect, but also if they perform acts that assist military operations without actually carrying a weapon. Likewise, combatants may cease to be lawful targets if they are rendered hors de combat by injury, but whether they are depends on the severity of that injury.
Are autonomous weapon systems capable of following the law? Some argue that a properly programmed weapon system will follow the law perfectly (Marco Sassoli, ‘Autonomous Weapons: Potential Advantages for the Respect of International Humanitarian Law’). It will not react in anger or panic, seek revenge or withhold information about its own conduct. Human soldiers do not always comply fully with IHL. Can machines do so perfectly? And if they cannot, and can comply only to the same imperfect standard as humans, is that good enough?
And what if the programming fails? Assigning liability is a challenge. A machine cannot be convicted of war crimes. The prosecution of developers and manufacturers is unlikely: as a preliminary bar, IHL applies only once hostilities have begun, and weapons developed in the lead-up to war, or during peacetime, fall outside the temporal coincidence required (Tim McFarland and Tim McCormack, ‘Mind the Gap: Can Developers of Autonomous Weapons Systems be Liable for War Crimes?’ 372). Those who procured the weapon may face the same challenge, and even if they did not, should they really bear legal responsibility? It would also be difficult, under existing modes of liability, to implicate a commander: if the weapon is autonomous to the degree that it selects and engages its own targets, the commander may not have the requisite knowledge of pending criminal acts (Jack M. Beard, ‘Autonomous Weapons and Human Responsibilities’ (2014) 45 Georgetown Journal of International Law 647, 658). For command responsibility to apply, the principle would have to be modified. But is that level of culpability appropriate if the weapon behaved autonomously?
Why regulate autonomous weapon systems?
The real conceptual difficulty with autonomous weapon systems is not one for lawyers, but one for ethicists. Article 1(2) of the API, the so-called Martens Clause, states that in cases not covered by other agreements, we remain guided by the principles of humanity and the dictates of public conscience. Does decision-making guided by humanity involve moral and intuitive judgments that cannot be reduced to an algorithm?
Consider Mark Bowden’s widely read 2013 article in The Atlantic, ‘The Killing Machines’, which recounts the experience of a 19-year-old drone operator. In 2013, when a truck began shooting at a patrol of marines in Afghanistan, he fired a Hellfire missile at the vehicle and destroyed it. Those marines were at war in Afghanistan. The drone operator was in an office building in the US. Months later, he was still bothered by delivering a ‘deathblow without having been in any danger’.
Of course, for militaries around the world, this distance from danger is one of the most significant benefits of autonomous weapon systems. True, machines are faster than humans at collecting, processing and acting upon information. They are also more accurate in firing at their selected targets, and thus reduce civilian casualties (Avery Plaw, Matthew S. Fricker & Brian Glyn Williams, ‘Practice Makes Perfect?: The Changing Civilian Toll of CIA Drone Strikes in Pakistan’), and they are not subject to fatigue or emotional responses. These are military advantages. But there is also an ethical advantage: the machine assumes the risk of war (Ronald Arkin, ‘Lethal Autonomous Systems and the Plight of the Non-Combatant’, AISB Quarterly, No. 137, 2013). For every unmanned weapon system deployed on a battlefield, at least one human soldier does not have to face that risk.
Autonomous weapon systems will never be bothered by a lack of mutual risk. The use of highly autonomous systems may remove the human’s sense of culpability in the act of killing, an act to which humans have a deep psychological resistance (see, eg, David Grossman, On Killing: The Psychological Cost of Learning to Kill in War and Society (Little, Brown & Co, Boston, 1995)). But does that diminished personal responsibility make it easier to dissociate ethically from the costs of war?
Aneta Peretko is a solicitor and the Chair of the South Australian International Humanitarian Law Collective, a group of young people who share an interest in the law of armed conflict. The views expressed in this article are solely her own.