
Magnifying human confusion: Meaningful Human Control and the ongoing debate on autonomous weapons 

Last week in Vienna, the Austrian government organized a two-day conference to further discussion of, and hopefully the development of, a legally binding instrument to regulate autonomous weapons systems (AWS). The “Vienna Conference” brought together academics, political leaders, and civil society representatives from over 140 countries for the purpose of “further advancing the debate on an international regulation of AWS.” From the keynote address to the final statement, the theme of “human control” over the use of force was front and center. In particular, the phrase “meaningful human control” was, and continues to be, the preferred term of art.

Yet the concept of Meaningful Human Control (MHC) has been bandied about over the last ten years, and in this back and forth, it has been fundamentally misunderstood and misrepresented, largely by the same sets of actors (academics, political leaders, and civil society) and for a variety of reasons. As one of the original authors of the concept of MHC, my remarks here are an effort to highlight that the “human control” of relevance in MHC is a set of processes and rules, rather than physical control over weapons systems. This is a crucial point that, when obscured, not only confounds discussions about international legal regulation, but also undermines the very structures that proponents of regulation rely upon.

The History of Meaningful Human Control 

In 2013, Richard Moyes, the head and Managing Partner of the UK disarmament NGO Article 36, coined the phrase “Meaningful Human Control” (MHC). He argued that when it comes to the use of autonomous weapons in armed conflict, the appropriate frame through which to view their use was “meaningful human control over individual attacks,” including but not limited to possessing “adequate contextual information,” “a positive action” initiated by a human “operator,” and “accountability” for those human individuals involved in the planning and execution of the attack.

From this point forward, however, Article 36 provided little detail on what “meaningful human control over individual attacks” actually requires, on how the concept fits as an addendum to or a derivation of international law, or on how it should be applied in domestic military policies, procedures, doctrine, training, and concepts of operations or employment. This is unsurprising, though, because privately Moyes unabashedly proclaimed that MHC was a “political concept” designed to get States to the negotiating table and engaged in discussion on a legally binding prohibition of lethal AWS. To date, this is still Article 36’s official position.

What Moyes was clear about from the very outset was that in discussing MHC, the correct unit of analysis should be the “level of attack.” In other words, when assessing whether an AWS could be lawfully used, it should be reviewed under existing international humanitarian law (distinction, proportionality, precaution, etc.) at the level of individual attacks. This makes sense conceptually, as under international law, attacks are “acts of violence against an adversary, whether carried out in attack or in defense in any territory,” and obligations are such that attacks can or cannot be permissibly directed at certain objects, objectives, etc. It makes somewhat less sense practically when we require weapons systems to undergo legal reviews, for if an AWS can be used discriminately, does not violate the prohibitions on superfluous injury or unnecessary suffering, and the like, then the level of analysis does not necessarily matter.

For Moyes, the level of attack was a constraint designed to limit autonomous systems in time and space, such as through some form of geofencing or limits on size, weight, and power, so as to anticipate their potential effects. The level of attack also helped him make his case by highlighting the difficulty of “preprogramming” a set of permissible target objects (or sensor inputs) for AWS to recognize and prosecute in a predictable and reliable manner, given the importance of context to permissible targeting.

In 2014, Moyes and I began to work together on his concept, attempting to flesh out in finer-grained detail what MHC requires from States and their militaries. We combined his early work on MHC with my expertise and work on AWS. Our work was supported by a grant from the Future of Life Institute, which provided funds not only to convene experts and roundtables to further work on MHC, but also to create the first dataset on automation in weapons systems. In April 2016, we wrote a joint briefing paper for the State delegates and participants at the United Nations Convention on Certain Conventional Weapons’ (CCW) Informal Meeting of Experts on Lethal Autonomous Weapons Systems. This paper argued for greater clarity on the concept of MHC, and it identified three nested levels where MHC exists: ante bellum, in bello, and post bellum.

The identification of conceptual and temporal levels of control was necessary not merely for State delegates, but for practitioners, militaries, engineers, lawyers, and many others to understand that MHC is about processes and rules created, instituted, and governed by humans. MHC is not, and has never been, an argument for direct physical control over weapons systems.

This is an incredibly important distinction. Arguments that humans must “pull the trigger” for each and every shot, or “push a button” for every engagement, or be “in the loop,” or that there must be a human “on the loop,” “overseeing the loop,” or part of a “wider loop” all become confused, muddled, and disingenuous. “The loop” itself becomes an unhelpful conceptual construct. First, if we go back to the earliest version of MHC, Moyes’ arguments do not entail physical control over an autonomous weapon system. Second, the later arguments we made highlight not merely the technical requirements of systems, but also that, because of the temporal layering, “there are systems, processes, and doctrines designed to uphold human control, and so it is appropriate that we view it beyond limited engagements.” Human control is viewed here widely as the set of structures humans create to manage the conduct of war, and MHC as applied to autonomous weapons is a way to provide processes and rules for their permissible use. If an AWS fails to comport with these established (and potentially new) rules, then the system is impermissible.

Meaningful Human Control or Appropriate Human Judgment 

In December of 2016, I put forth another briefing paper to the State delegates at the CCW. This paper examined the similarity between the United States’ preferred term of “appropriate human judgment” and the concept of “meaningful human control.” The work, supported by the Canadian Department of National Defence, looked at whether, or to what extent, these two positions could be reconciled. In effect, I argued that they can be, because they are essentially saying the same thing. Indeed, it is more of a “you say tomato, I say tomato” situation once we look to the requirements and interpretation of the law.

On the one hand, I argued that the U.S.’ explicit assertion in its Law of War Manual that the obligations of distinction and proportionality apply to “persons” and not to the “weapons themselves” (because inanimate objects do not assume legal obligations) coalesced with the MHC framework. On the other hand, I also argued that these rules impose both negative and positive obligations on human commanders, which track with existing U.S. military doctrine on “positive” and “negative” control. Positive control is the “assurance that authoritative instructions to military missions be carried out,” and negative control is “the prevention of any unauthorized use.”

The positive and negative control arguments make room for conceptual clarity on the notion of “authorization,” where one not only has the “authority” to undertake uses of force, but also, given the structure of military command and control, the authority to delegate particular tasks in the prosecution of that use of force. Authorization and delegation of authority are crucial to discussions around AWS – legally, practically, and in many instances technically. Humans (and the offices or positions they hold) in military chains of command possess varying degrees of authority, and they may delegate (permissibly or impermissibly) that authority to others.

It is beyond the scope of this blog to enter into an extended discussion of the parameters of permissible delegation, but the fact remains that this is the rub when it comes to AWS. It is merely disguised in a variety of forms: technical infeasibility; affronts to human dignity; violations of (non)existing legal rules. Yet nowhere in the discussion about MHC has there been any requirement for absolute physical control over weapons systems, because there has never been absolute physical control over any weapon system in the history of war. From slings to longbows to nuclear weapons, once released, a weapon is no longer under physical control. There are ways in which we can technically reduce uncertainty about the effects of weapons systems, but these are all matters of probability. Likewise, there are ways we can try to maintain “control” over the use of force, but these too are processes, rules, and institutions, and do not in any way require physical control.

Yet many, willfully or not, prefer to hold onto this mistaken belief. Arguments that it is preferable to use older or legacy systems because there is a human pushing a button or pulling a trigger lead to highly illogical (and immoral) outcomes simply because there “was a human somewhere.” Requiring a dumb bomb or an unguided mortar that could cause more harm, rather than a system with a higher degree of accuracy, seems at best wrong-headed and at worst inhumane. Additionally, this argument seems to stem from a need for temporal or physical proximity between combatants for there to be some sense of justice or fairness. While it is outside my scope here, all I can say is that, at least in just war theory, there is no moral requirement for physical proximity in armed conflict, and the case for temporal proximity is arguable at best.

But the argument for using precision-guided munitions is also not the same as the argument for using AWS, and this is where political sleights of hand, technological confusion, or operational ignorance begin to show. It is no surprise that States will make declarations, or withhold support, for political reasons. Indeed, many of the early years of debate over autonomous weapons were accompanied by broad criticism of “drone warfare,” despite the fact that the “drones” and operations in question were piloted by humans. Using the discussion about autonomous systems as a political platform to make statements about remotely piloted aircraft does not help clarify any issue.

Likewise, the assertion that autonomous weapons will be as precise or accurate as, or more so than, existing precision-guided munitions is a red herring. This is because we are not talking about the same things: we are making category mistakes. A precision-guided munition that relies upon GPS, terrain mapping, or particular sensory signatures to find and destroy a target has nothing to say about autonomy, because autonomy is a behavior, not a thing. For example, I may have an autonomous unguided bomb that behaves autonomously in way X. Likewise, I may have an autonomous precision-guided munition. It is the “autonomous” part that matters.

Moreover, we lose sight of the fact that we are discussing the “weapons system.” A weapons system is the entirety of the system: the weapon (warhead, what-have-you) and the related equipment, materials, services, personnel, and means of delivery and deployment required for self-sufficiency. So if Country A has an autonomous combat aircraft that flies to coordinates X, Y, uses its onboard processing capabilities to assess adversary threats, selects the particular munition to attack a threat, deploys that munition, say an air-to-air missile, and then flies home, we would not say that the air-to-air missile was the “autonomous weapon.” It was the entirety of the system – the airframe, the onboard automatic target recognition, the sensor suites, the munition, the comms networks, etc. – that was the autonomous weapon.

The fictitious autonomous weapon system described above could be completely permissible by the standards associated with MHC, as it could be by “appropriate human judgment,” and it may very well pass weapons reviews. Or it may not. But what we assess of that particular weapons system in a legal review is different from assessing whether it comports with MHC or appropriate human judgment, because the legal review assesses whether and to what extent the weapon system can be used in accordance with IHL. MHC is a wider set of considerations. Unfortunately, the debate over the past 10 years has shown that even the assessment of what is required for “control” to be meaningful has become a set of sliding goalposts, where not only are the standards (technical and otherwise) debated, but in some instances the very premise of nonphysical control is rejected.

The Next 10 Years 

I am certainly not advocating for a position that there be no debate on technical standards. Technical requirements will continue to be up for negotiation and debate because much of the science around autonomy and assuring autonomous systems is still nascent. What I am stating, however, is that there should be no debate on the fact that there is a need for a process of responsible innovation, development, and deployment by militaries. As far as I have seen over the last decade, no State has publicly stated that it does not want to uphold such a process. While there is certainly disagreement within the CCW, there is continued movement between “like-minded” States, as well as increasing sharing of information and practices.   

The devil is always in the details, as we all know. How States will envision the use of autonomous systems during armed conflict, where they will invest their limited resources, and how militaries will use these systems is still unfolding. International law is as much about black letter as it is about state practice. Yet MHC still has a strong following in many industry, academic, and international circles. And while I am happy to see that so much work and support around MHC continues, I am afraid that I have not seen much clarity result on any of the issues. The facts of the case remain the same now as they were 10 years ago, despite much ink spilt and many “AI advances” made.

After 2016, I shifted my attention away from MHC and toward other areas of law, policy, and ethics related to AWS and artificial intelligence. I have still followed these debates rather closely, and I was honored to be chosen as the Special Governmental Expert to the U.S. Defense Innovation Board’s AI Ethics Principles Project. That project yielded the set of five AI Ethics Principles for the U.S. Department of Defense, which were signed into effect by then-Secretary Esper in February of 2020. I have worked with and alongside a number of people on these issues over the years, from all segments of academia, defense, industry, and civil society. My opinions are merely my own, but they are informed by a rather wide aperture. I hope that in the coming decade we can minimize our human confusion over these topics, as doing so will be increasingly necessary if we are to uphold our legal and moral obligations as we move forward with artificial intelligence and autonomous systems deployed and used in armed conflict.

Dr. Heather Roff is a Senior Research Analyst at the Center for Naval Analyses. She is the author of numerous scholarly works, and she has held multiple academic and industry positions, as well as fellowships at the Brookings Institution, New America, and the University of Cambridge.

