As early as 1950, computer scientists such as Alan Turing were considering whether a machine might ever be capable of thought and, if so, what the implications of this might be for humankind. Turing opined that

I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. (Alan Turing, ‘Computing Machinery and Intelligence’ (1950) 59 Mind 433, 442)

The dawn of the twenty-first century has proven Turing’s prediction more or less accurate in substance. The development of automated, autonomous and artificially intelligent machines has the capacity to revolutionise human existence. In particular, the rise of these machines has enormous implications for the conduct of warfare.

An autonomous weapons system (AWS) is one that is capable of operating, to a greater or lesser extent, without human intervention. Autonomous machines must be distinguished from automatic machines: whereas an automatic machine can be left to carry out a defined task under strict parameters with predictable results, an autonomous machine can comprehend and respond to varied situations without human input.

The question of whether AWS could ever comply with international humanitarian law (IHL) has been thoroughly discussed, with conclusions ranging from adamant rejection to more favourable and nuanced critiques. Human Rights Watch, for example, published the dramatically titled report ‘Losing Humanity: The Case Against Killer Robots’, which called for a complete ban on AWS on the premise ‘that fully autonomous weapons would not only be unable to meet legal standards but would also undermine essential non-legal safeguards for civilians.’ Professor Michael Schmitt, on the other hand, points out that autonomous weapons may be (though will not necessarily be) more compliant with the laws of armed conflict than traditional military systems (Michael N Schmitt, Autonomous Weapons Systems and International Humanitarian Law: A Reply to the Critics).

These discussions, though crucial to the development of the law regulating AWS, overshadow an equally important but far less considered challenge: the question of how international criminal law (ICL), the system of enforcement developed to promote accountability for violations of IHL, can be applied to crimes involving machines as perpetrators.

The most intuitive response to this question seems to be that the programmer ought to be liable. After all, one might assume that it is the programmer who designs the parameters that dictate the machine’s behaviour. This, however, is an overly simplistic approach to what will likely be, in the coming decades, a complicated area of law. An AWS, rather than having one programmer and one user to whom liability may be clearly attributed, is likely to have been programmed by an entire team of developers and to be operated by a team of users (See, eg, General Atomics Aeronautical, Predator UAS (2014)). Moreover, it is likely to operate alongside human peers and commanders in a combat setting.

This raises several challenging questions. First, can a machine ever be liable for a crime in its own right? Secondly, in any event, how can we create accountability for any humans directly involved in a crime alongside an AWS? Finally, can that accountability extend along the chain of command?

These issues have been discussed at length elsewhere — individually by other authors, some of whom are cited in this work; and cohesively by this author in an undergraduate dissertation from which this work is adapted. The following discussion attempts to introduce the issues and frame what is likely to be a significant legal debate as AWS technology develops and becomes more widespread.

Can a machine commit a war crime?

Can a machine ever satisfy the mental elements of a criminal law that has evolved over centuries to moderate and punish human behaviour?

Questions like this tend to spark debates about whether the human capacity for logical, emotional and moral reasoning can ever be replicated in a machine. However, this debate is misplaced in an exploration of mens rea and machine liability because it conflates questions of law with existential questions of sentience, morality, and reason. IHL is silent as to ethical or moral reasoning. Decisions are either lawful or not lawful; within the scope of what is lawful, the law offers no moral guidance or judgment (See generally Dale Stephens, ‘The Role of Law in Military Decision-Making: Lawfare or Law Fair’ (SJD Thesis, Harvard University, 2014) ch 1). A person can be criminally liable for a breach of the laws of armed conflict regardless of their motives, their morality or their ethical reasoning.

The exact definitions and requirements of mens rea vary between jurisdictions and offences and have been discussed at length elsewhere. For the purposes of this discussion, intent is taken to require knowledge and volition: knowledge of the relevant act or omission and the circumstances or results, and volitional action to engage in the act and bring about the contemplated result (or at least volitional acceptance of the risk of the result) (See Prosecutor v Bemba (Pre-Trial Decision) [357]–[359], cited in Johan van der Vyver, ‘Prosecutor v Jean-Pierre Bemba Gombo’ (2010) 104 American Journal of International Law 241).

In a technical sense, knowledge is ‘the sensory reception of factual data and the understanding of that data’ (Gabriel Hallevy, ‘Virtual Criminal Responsibility’ (2010) Original Law Review, citing William James, The Principles of Psychology (1890) and Hermann von Helmholtz, The Facts of Perception (1878). Hallevy applies the term ‘artificial intelligence’ to systems already in use, including in industry, medicine and gaming; his general discussion of machine liability is therefore applicable to many machines already in use as well as to the immediate future development of machines in warfare). There are machines in operation today that possess knowledge in this sense. GPS units, fingerprint scanners, facial recognition technologies and medical sensors all use a combination of input devices and contextual information to receive, store and process knowledge in a similar fashion to the human brain.

Volition is another matter, and depends on the sophistication of the machine’s programming and its independence from human operators. A distinction must be drawn between a machine carrying out the task for which it was programmed, and a more sophisticated machine which was not programmed for a particular task, but was instead programmed with learning capabilities and the capacity to make autonomous decisions. In the former case, the intention does not belong to the machine, but to its human operator. Even in the latter case, it is difficult to draw a line between what the programmer designs a machine to do, and what the machine does of its own volition.

Clearly, there are significant questions about whether a machine could form mens rea. These questions might only be answered as the technology develops. In order to create accountability in the meantime, it is necessary to consider AWS in a broader context.

A gun, a soldier, or an innocent agent?

The ambiguity of machine intelligence means at least three legal options must be considered. The first is another intuitive response: why discuss the liability of machines at all? Under this approach, an AWS is no more than a gun or other weapon in the hands of a human operator. This makes sense when considering, for example, remotely piloted Predator drones.

Equating an AWS with a gun makes less sense, however, where humans are the supervisors rather than the operators of the machines. Setting aside questions of use and command restrictions, the key feature of an AWS is autonomy; an AWS by its very definition has the capacity to perform functions independently of human input. It is this feature that places AWS in a fundamentally different class from an AK-47 (which requires contemporaneous human input) and an antipersonnel mine (which requires non-contemporaneous human input).

AWS and perpetration by another

That being the case, two options remain for situating the AWS in the framework of ICL. One is to treat the programmer or the human user of the AWS as a perpetrator-by-another (Hallevy, above, 11-13). In this approach, the machine is deemed capable of perpetrating the actus reus or physical elements of the offence, but incapable of forming the requisite mens rea or mental elements. This is more or less equivalent to the indirect perpetration model in article 25(3)(a) of the Rome Statute. The AWS is treated the same way as an infant or a mentally incompetent adult.

AWS and group criminal liability

The problem with this model is that, as discussed above, it is more simplistic than the real-world environment in which AWS are likely to operate. It is necessary to consider how the indirect perpetration model might work alongside group modes of liability. Fortunately, this is not a novel concept in ICL: the Pre-Trial Chamber of the International Criminal Court accepted in Katanga that group liability can apply to cases of indirect perpetration (Prosecutor v Katanga (Decision on Confirmation of Charges) cited in Jernej Letnar Cernic, ‘Shaping the Spiderweb: Towards the Concept of Joint Commission Through Another Person under the Rome Statute and Jurisprudence of the International Criminal Court’ (2011) 22 Criminal Law Forum 539).

AWS as perpetrators

Finally, AWS may be viewed as perpetrators in their own right (Hallevy, above, 10). This approach initially seems outlandish in light of today’s widespread technology. However, in the not-entirely-futuristic event that an AWS is programmed with machine learning capabilities and makes a decision that was not specifically dictated by a programmer or user, this might be the most rational approach.

In this last approach, the problem becomes one of accountability. A human can be fined, jailed, or even sentenced to death for a crime; these punishments are unlikely to have any impact on machines. Hallevy argues that as with corporate criminal responsibility, the punishment ought to be adapted to the perpetrator: corporations, for example, cannot be jailed but can be fined (Hallevy, above, 22-6). The difference, however, between corporations and machines is that when a corporation is punished, ultimately its human owners suffer. The same cannot necessarily be said of machines, and this is an area that warrants significant further consideration.

AWS and command responsibility

What liability attaches to the commander of an AWS? Schmitt argues that under the ICL doctrine of command responsibility, the ultimate responsibility for a war crime committed by an AWS would lie with the military commander responsible for deploying the machine into the circumstances in which the crime was committed (Schmitt, above). The concept of holding a superior responsible for crimes committed by subordinates is an accepted principle of customary international law (See, eg, Prosecutor v Delalic et al (Appeal Judgment); Prosecutor v Limaj et al (Trial Judgment)).

However, command responsibility is not vicarious liability (See generally Ilias Bantekas, International Criminal Law (Hart Publishing, 4th ed, 2010)), and the application of the doctrine in the context of AWS raises some important questions. The first is whether a commander can be held liable for a crime committed by a machine despite general doubt as to whether a machine can ever possess the requisite mental elements of a crime. The second concerns the nature and degree of understanding required before a commander can be said to have had ‘reason to know’ that a crime was about to be committed. The third is what would constitute ‘punishment’ in the context of a crime committed by an AWS.

With regard to the first question, if the law is reluctant to find that a machine is capable of forming mens rea, then it cannot be said that a crime has been made out for which the commander might be liable. The law as it stands therefore creates a significant gap in accountability for commanders of AWS.

The second question arises because as the algorithms used in AWS become increasingly complicated, it becomes less and less likely that a commander without extensive specialist training will understand the AWS in enough detail to have knowledge that a crime is about to be committed. It could be argued that a commander with even basic training regarding the AWS ought to have known, but this ventures dangerously close to presuming knowledge, an approach rejected by the ICTY in the Limaj trial. Again, this creates a gap in accountability.

Finally, as to the third question, it might be sufficient that a commander conduct ‘an effective investigation with a view to establishing the facts’ (Limaj Trial, above, [529]). This point is unsettled, though, and warrants further consideration. Moreover, it is not likely to be a politically palatable option in light of strong public sentiment against AWS.

AWS are no longer the realm of science fiction, and the international legal community (led by countries with advanced militaries, including Australia and its allies) must seriously consider the implications of this. To date, almost all of this consideration has been dedicated to the compliance of AWS with IHL. The aim of this discussion has been to introduce some of the questions that will arise in the event that AWS, whether by design or in operation, are not so compliant. While we are yet to discover whether such systems will actually be deployed, the research being undertaken to this end means that blanket denial is no longer helpful and the challenge must be acknowledged.

Sarah Ahern is a member of teaching staff at Adelaide Law School, where she tutors International Law and International Humanitarian Law. This post is adapted from her undergraduate dissertation ‘The Limits of International Criminal Law in Creating Accountability for War Crimes Committed by Autonomous Machines’. You can contact Sarah on Twitter @SarahKAhern.