Five Giant Leaps for Robotkind: Expanding the Possible in Autonomous Weapons
History teems with brutal ironies. The printed program for the November 29, 1941 Army-Navy football game included a photo of the USS Arizona with the caption, “It is significant that despite the claims of air enthusiasts no battleship has yet been sunk by bombs.” Just eight days before the Pearl Harbor attack, the destruction of several battleships by aircraft seemed impossible.
The biologist Stephen Jay Gould observed, “Impossible is usually defined by our theories, not given by nature.” The dividing line between the possible and the impossible is often contingent on our incomplete understanding of the nature of the world and the provisional assumptions we use to explain and to predict. Rarely do these assumptions align perfectly with reality. In the development of combat capabilities, we may behave as though the boundary that divides the possible from the impossible exists in nature, waiting for us to discover it and push our military power up to its very limits. But there is no such line anywhere but in our heads. The boundary is the product of our ideas about the realm of current possibilities and our limited understanding of uncountable future possibilities.
Standing at the beginning of the robotics revolution in warfare, we too frequently speak of impossibilities. In a recent speech, Secretary of Defense Ashton Carter said:
I’ll repeat yet again, since it keeps coming up, that when it comes to using autonomy in our weapons systems, we will always have a human being in decision-making about the use of force.
That is a clear assertion that full autonomy for lethal systems is not to be, at least according to the current secretary of defense, other senior defense officials, and DoD policy. It is easy to understand this position, given our current boundaries of understanding. First, technology is not yet discriminate enough to justify using lethal autonomous weapons, especially given American preferences for limiting friendly and noncombatant casualties. Second, these systems create a justifiable sense of dread. Nobody wants to hasten the robot apocalypse. However, we make an error when we use current technological limitations, our fears of killer robots, and legal arguments of jus in bello to assert the impossibility of lethal artificial intelligence as an effective military capability. This is a case of our theories constraining our imaginations, which in turn limits the development of potentially useful warfighting technologies.
We must recognize that while current policy does not change the nature of a pending reality, it may cause us to discover it later than our adversaries. U.S. leaders should support development of operationally effective lethal autonomous weapons systems now with the dual objectives of maintaining strategic capability overmatch today and participating in eventual arms control negotiations about these systems from a position of strength.
Asserting that unmanned systems will always have a human in the loop will constrain development of artificially intelligent military systems. Instead, leaders should identify key technological milestones for robotic systems to surpass human-centric military capabilities and then focus research and development on achieving those specific goals. In this essay, we identify five "giant leaps" in capability that mark the path toward fully autonomous and lethal weapons. Taken together, they provide developers inside and outside of the Department of Defense with a set of benchmarks that extend the realm of the possible.
Leap 1: The Hostage Rescue Test and Autonomous Discriminate Lethality
This test involves challenging robotic platforms to exceed optimal human performance in speed and discrimination in a series of hostage rescue scenarios. With its combination of high tactical speed, confined and confusing spaces, and sensory limitations, a hostage rescue scenario poses a significant challenge to military units. It is an equally stiff challenge to artificial intelligence, and it features two major thresholds for the development of autonomous systems. First, it requires precise discrimination between friend and foe. Second, the dynamics of the environment and the presence of "friendly" hostages mean that lethal decisions occur too quickly to accommodate human oversight. The robots must be empowered to decide whether to kill without receiving permission in that moment. The standard is not perfection; even the best-trained human teams make mistakes. An effective "leap" threshold is for a robotic team to complete the task faster while making fewer mistakes than the best human teams. Doing so represents two major advances in robotics development: fine target discrimination in a degraded sensory environment and lethal action without human oversight or intervention.
Leap 2: The Paratrooper Test and Creating Order Without Communications
In the early hours before the D-Day amphibious landings, American paratroopers landed behind the defenses of Normandy. Difficult operational conditions left many troops dispersed over large areas, out of contact with each other, and unable to communicate with commanders. Forced to improvise under difficult circumstances, paratroopers formed ad hoc units, quickly organizing to fight and meet their mission objectives.
On a modern battlefield with ubiquitous sensors and electronic signature concealment, military units must be prepared to operate without persistent communications. In such an environment, human beings can still organize and function, and robotic teams must possess the same capability. Yet current Department of Defense policy expressly forbids even the development of autonomous systems that can select and engage targets when communications are degraded or lost. Others have already suggested that this is a mistake, and some senior leaders have acknowledged the limits of the current approach. Effective robotic systems need to be able to organize spontaneously, communicate, and act collectively in pursuing an objective without calling back to their human commanders.
The paratrooper test involves scattering robotic platforms in a communication-deprived environment and challenging them to form effective teams and coordinate their collective behavior to achieve an operational objective.
Leap 3: The B.A. Baracus Test and Improvising Materiel Solutions
On the 1980s TV show The A-Team, the character B.A. Baracus was a gifted mechanic responsible for improvising materiel solutions to the team’s problems (usually converting a regular vehicle into an armored car). Silly as it was, B.A.’s mechanical magic captured a reality of conflict: the need to adapt existing equipment to unanticipated operational problems. In war, enemies adapt. These adaptations reveal the shortcomings of existing solutions, which in turn often require materiel adaptation and improvisation. History is full of examples, from the Roman creation of the “corvus” (crow or raven) grappling device to overcome Carthaginian superiority in naval maneuver to the U.S. Army’s “rhino” modification of the Sherman tank to burst through the Normandy hedgerows.
The B.A. Baracus test challenges a robotic team to manipulate physical resources to modify or create equipment to overcome an unanticipated operational problem. This test is crucial for fully autonomous military systems. Nanotechnology and additive manufacturing suggest that such a capability is not as outlandish as it seems. Of course, such a challenge can vary in sophistication. The basic premise is that machines must be able to improvise modifications to themselves or to other machines using materials on hand to be effective as a fighting force. This capability represents a major and necessary advance in autonomous systems.
Leap 4: The Spontaneous Doctrine Test and Finding New Ways to Fight
The competitive conditions of war will require more than just adjustments in materiel or operational objectives. The introduction of mobile and intelligent autonomous machines into the operating environment demands innovation in how human-machine teams organize and fight against similarly equipped adversaries. During the Vietnam War, early American successes in massing infantry using helicopters resulted in adaptation by the North Vietnamese and Viet Cong. As Stephen Rosen observed in Winning the Next War, enemy forces became less willing to engage U.S. units in open combat, preferring traditional guerrilla tactics. These changes prompted further doctrinal innovation by the Americans.
Current approaches to the use of unmanned systems place them inside established doctrine with a corresponding organization by domains. This may not be optimal or appropriate for artificially intelligent systems. Effective robotic systems must be able to experiment rapidly and independently with different ways of fighting.
The spontaneous doctrine test involves deliberately placing a robotic system in a situation for which it is suboptimally organized or equipped for an objective, and then allowing it to explore different ways of fighting. We should expect that the unique characteristics of robotic systems will cause them to organize differently around a military problem than humans would. We must challenge autonomous systems to organize dynamically and employ capabilities based on the competitive conditions they face, spontaneously developing ways of fighting that are better-suited to the evolving conditions of future combat.
Leap 5: The Disciplined Initiative Test and Justified vs. Unjustified Disobedience
Effective autonomous systems must be able to recognize when altering or even contradicting the orders of superiors is justifiable because those orders do not support achieving a higher objective. War is fought amid extreme uncertainty and constant change. Units must preserve the ability to adjust their objectives based on changing conditions. Two different situations especially require such adjustments. The first is a positive instance: when junior commanders appropriately change objectives to achieve greater gains, or what the U.S. military terms “disciplined initiative” in command. This refers to the power of subordinate commanders to alter their objectives or even exceed the stated objectives of senior commanders when circumstances require it. This is a form of justifiable disobedience, and good senior leaders do not object to it. The second is a negative instance, when junior commanders refuse to obey an order because it is illegal, immoral, or excessively risky.
The disciplined initiative test challenges teams of robots to use disciplined initiative in both positive and negative instances, giving them orders that are inappropriate to actual battlefield conditions and allowing them to decide whether to follow orders or devise another approach. The test should be conducted without the ability to communicate with commanders, requiring subordinate systems to adjust their objectives independently based on their new understanding.
What Next? Sacrifice and Transcendence
The discussion thus far has focused on how to create benchmarks for the development of autonomous weapons with the capability of exceeding human-operated systems. Before concluding, let us ask two final questions for future consideration.
First, how do we think about the development of the desire for self-preservation in a robotic weapons system? In Star Trek II: The Wrath of Khan, Spock sacrifices himself to save the Enterprise, explaining as he dies, "the needs of the many outweigh the needs of the few… or the one." Human beings (or Vulcans) have a strong sense of self-preservation, but they are also capable of overcoming that instinct in seeking a higher goal, such as preserving the lives of others. A desire to survive and a willingness to sacrifice are both necessary for effective militaries. Without a sense of the value of life, a military will waste itself by taking pointless or avoidable risks. Conversely, without a willingness to sacrifice, a military will not take the risks necessary to achieve worthy objectives.
High-quality robotic systems will not be cheap. For the military to use them effectively, these systems must have a desire for self-preservation. Yet they must also be able to recognize when choosing certain destruction is the right thing to do. Effective autonomous systems must have the ability to choose between self-preservation and “the needs of the many.”
Second, how do we develop the potential of system-wide artificial intelligence to greatest effect? A robot can distribute its cyber "mind" across numerous platforms in the air and space, on land, and on or under the sea. Integration can be intuitive and seamless, with the artificial intelligence perceiving and acting simultaneously across all of these areas. Military domains exist only because of the cognitive limitations humans face in understanding the development and uses of different military instruments. An advanced artificial intelligence does not have the constraints that require such a division of labor. For robots, domains need not exist as distinct "joint" functions in the way they do for humans. Artificial intelligence can transcend the physical domains that organize and constrain human combat development and military operations.
The future of autonomous weapons is intimidating. We cannot allow our trepidation about that future to prevent us from shaping and controlling it. If we stand aside, others will take our place, and they may create the nightmarish world that we fear.
Dr. Andrew Hill is an Associate Professor at the U.S. Army War College, and Director of the Carlisle Scholars Program. He is a regular contributor to War on the Rocks.
Col. Gregg Thompson is an instructor in the Department of Command, Leadership, and Management at the Army War College. Previously, he served as the Director for Capability Development and Integration at the Maneuver Support Center of Excellence, Fort Leonard Wood, Missouri.
Because all robots have either 'a human in the loop' or 'humans in the gestation chain,' each robot must have either zero-defect components or sufficient autonomy to detect and correct internal faults.
Because autonomy relies on software, and because software is the lowest-quality artifact mankind produces, the ability to achieve and sustain zero-defect resources and interactions is ground zero.
This must not be overlooked. How to assess human constructs for integrity is a fundamental challenge. No amount of testing will be sufficient because, as Prof. Dijkstra warned us decades ago, "Testing shows the presence, not the absence, of bugs." We must adopt new ways of vetting autonomous units and swarms thereof.
The biggest problem with any machine we live with today is that millions of lines of code cause it to crash, whether from bugs or from a processor that can't handle the load. On a battlefield that is often chaotic, how much risk can we take if such a system fails when it is designed to be totally autonomous? What happens if the signal is hacked? What happens if, during the fight, the autonomous system turns against the soldiers, never mind civilians? How much can be tolerated? All systems fail, and the more complicated the system, the more likely it is to fail. Unless an operator is in the loop, how much trust can we put in it?
First, let's stop talking about artificial intelligence like it's in the next room. AI doesn't yet exist, and when and if it comes about, it may take a different form or motivation (we're talking intelligence, so there's a certain amount of will) than we currently understand. So put AI to the side for now.
Short of a truly independent intelligence, then, autonomous weapons are controlled by software written by humans, or machines governed by rules written by human programmers. Thus human error is introduced simply by exposure. Given that, human monitoring or human-in-the-loop control is inevitable.
Terrific piece.
Focus on the Gould quote: "Impossible is usually defined by our theories, not given by nature." If we don't, we get stuck in arguments that AI and autonomy are different, not here yet, require a man in the loop, and so on. These are all distractions. Massive technology changes are coming over the next couple of years; let's not miss them because we are lost in policy arguments.
Assume it’s all possible…the difference becomes more crucial. The authors postulate not just weapons acting autonomously under specific limitations, but with a decision-making capability on when, where, and how to engage in combat:
“The disciplined initiative test challenges teams of robots to use disciplined initiative in both positive and negative instances, giving them orders that are inappropriate to actual battlefield conditions and allowing them to decide whether to follow orders or devise another approach. The test should be conducted without the ability to communicate with commanders, requiring subordinate systems to adjust their objectives independently based on their new understanding.”
If we’re talking simple (in comparison) autonomy, we’re still in the realm of human-designed software, with rules we understand. Jumping from there to AI is much more powerful stuff — not just giving the machine the ability to modify software, but to modify the rules…one of humanity’s longest-lived nightmares, back to the Golem legend.
Leap 1 & 2 can be readily implemented now, the rest will have to wait a bit. I actually think hunter/killer robots are a more HUMANE weapons of war – especially for replacing two weapons: Cluster-bombs and Landmines.
For killing dispersed, softer targets over a wide area, a cluster bomb is what would normally be called on. However, you leave behind toy-like bombs that kill and maim civilians for years afterwards. Instead, you can drop a "pack" of hunter/killers into the area. Imagine that loud-ass dog robot the Marines were testing, now with a chainsaw on the front. That running through your command position will focus attention on bugging out! You can have it kill, maim, and destroy vehicles within a prescribed area for a prescribed time. You can even make them herd people into a smaller area for humans to take over, if you wish. When they are done, they won't harm civilians for years afterwards.
A similar pack can be used for area denial instead of mines. They can hide, lie in wait, and pounce on anyone in a clearly defined area. They can have programming that allows people to drop weapons and run, or to carry wounded away. They could even allow a corridor for unarmed humans to pass, closely watched for hostiles, of course. Again, they can be turned off or recalled and will not leave an area riddled with surprises when hostilities end.
Horse pucky. Fused weapons are the simplest form of autonomous weapon. They make one decision: are conditions met to detonate? In fact, many modern mines and cluster submunitions are designed to either explode or deactivate after a certain amount of time passes. The fact that they don't always do so simply indicates that reliability isn't 100 percent… it never is. Extend that to your more complex mechanical attack dogs. Assume said doggie can tell friendly, enemy, and neutral apart with a 99.9 percent reliability rate. Pity the thousandth person… and hope you're not the government spokesman who has to explain it!
All, I appreciate the comments on this topic. Autonomy, in our view, portends a fundamental change in the way future military systems will be designed and employed. As such, we believe the possible should not be constrained by the limits of our current biases in policy, ethics, programming, and so on. This debate is important, and the professional discourse must continue in places like WOTR and elsewhere. From 8 Jan 17, a 60 Minutes report that perhaps better illustrates the point: http://www.cbsnews.com/news/60-minutes-autonomous-drones-set-to-revolutionize-military-technology/
Thanks.
Gregg Thompson