Remotely piloted aircraft such as the MQ-9 Reaper and RQ-4 Global Hawk are flown by squadrons of pilots and sensor operators on the ground. Five or 10 years from now, however, that may no longer be the case, as full autonomy for air vehicles is well within the Air Force’s technical reach.
According to USAF officials, artificial intelligence and other technology advances will enable unmanned systems to make and execute complex decisions required for full autonomy sometime in the decade after 2015.
Advances in information management, vehicles, and weapons have opened the door to highly complex applications of autonomy with far less human intervention in the mission timeline. Threat is a driver, too: Technical advances in autonomy can improve reaction time and chances for mission success in contested or denied airspace.
The Pentagon says full speed ahead. In November 2012, then-Deputy Secretary of Defense Ashton B. Carter issued new guidelines on autonomous weapons development. The guidelines authorized combatant commanders to incorporate more weapon systems with autonomy into operational missions.
The intent was to pursue operational advantages and “allow commanders and operators to exercise appropriate levels of human judgment in the use of force,” according to the policy directive.
Two more thumbs up came from the undersecretary of defense for acquisition, technology, and logistics, Frank Kendall III, and the vice chairman of the Joint Chiefs of Staff, Adm. James A. Winnefeld Jr., when they released an updated unmanned systems roadmap in 2013.
“Autonomy in unmanned systems will be critical to future conflicts that will be fought and won with technology,” the roadmap noted.
Autonomy refers to what a machine can do by itself. The concept started out as a way to reduce the workload of human operators by transferring partial operations to a machine process—e.g., an airplane’s autopilot mode.
Dark and Light
Autonomy technologies stand to make a major difference in the contested battlespace—but they will be contested in public debate, too. Increasing levels of autonomy stir controversy when they touch on deep-seated fears and values surrounding the use of force. At issue is whether repositioning the elements of human control alters the concept of legitimate action.
Discomfort persists. “Drones are a technological step that further isolates the American people from military action,” law professor Mary L. Dudziak observed in a 2009 New Yorker article. The release of the November 2012 guidelines stirred calls for an executive order declaring that lethal and nonlethal attack by fully autonomous weapons violates the law of war.
Intriguingly, there is a vocal group on the other side, too. These scientists see autonomy as a means to reduce error and enhance the legitimacy of the use of force. While some decry the growth of autonomy, others have pointed out it can subtract human weaknesses from combat. Full-scale robots “would be unaffected by the emotions, adrenaline, and stress that cause soldiers to overreact or deliberately overstep the rules of engagement,” hypothesized a California Polytechnic State University team sponsored by the Office of Naval Research. These robots could even “act as objective, unblinking observers on the battlefield, reporting any unethical behavior back to command,” they said in the report “Autonomous Military Robotics: Risk, Ethics, and Design.”
Taken to the extreme, autonomy theoretically enhances legitimacy. “Future generations may come to regard tactical warfare as properly the business of machines and not appropriate for people at all,” noted Thomas K. Adams in a 2001 article for the US Army War College’s journal Parameters, reprinted in 2011.
A consensus on the proper roles for autonomy is lagging behind the technical possibilities. For example, most in the debate agree that weapon autonomy is more acceptable for self-defense of a fixed air base or a platform such as an aircraft carrier at sea. Automated close-in defensive fires systems like the Phalanx 20 mm gun were designed to search, track, and engage automatically. Land-based Phalanx systems deployed extensively at forward operating bases in Iraq recorded more than 100 intercepts by 2010. Current DOD policy explicitly approves human-supervised autonomous weapons like Phalanx when they are used to thwart time-critical or saturation attacks on manned installations. In other words, automated self-defense systems to protect human life are considered well within bounds.
Problems arise when those distinctions blur. Does pre-emptive attack against a missile launch site by an autonomous system fit the criteria? Would having human commanders set the mission parameters skate under the bar, or does the input have to take place within a specified period of time? The point is that sanctioning autonomy only as a defensive weapon will soon be too small a fig leaf. Questions about offensive employment of autonomous weapons cannot be avoided.
One way ahead could be to subject autonomous systems to blue-suit evaluation and discipline. Writing in 2002, an Air Force Research Laboratory team took on the challenge of setting up autonomy metrics. “The great insight was this: We are designing algorithms, agents if you will, to replace pilot decision functions. Machines replace humans—so why not look at the human effectiveness community for metrics?” The AFRL team pointed to the OODA (observe, orient, decide, and act) Loop as an obvious choice for the Air Force. But the team’s insight is broader. Autonomous operations will remain within a larger framework of the human joint force commander’s mission and intent. There’s every chance to keep ethics and efficiency in the loop.
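If algorithms are replacing pilot decision functions, the OODA cycle maps naturally onto a software control loop. The sketch below is purely illustrative, assumes nothing about actual AFRL metrics or algorithms, and uses invented names and thresholds throughout:

```python
# Illustrative only: a minimal OODA-style control cycle for an autonomous
# agent. All names and thresholds are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Observation:
    contact_range: float  # distance to a detected contact, arbitrary units

def observe() -> Observation:
    # Stand-in for sensor input; a real system would fuse many sources.
    return Observation(contact_range=42.0)

def orient(obs: Observation) -> str:
    # Interpret raw data against mission context.
    return "threat" if obs.contact_range < 50.0 else "clear"

def decide(assessment: str) -> str:
    # Choose an action; this is the step where pilot judgment (or its
    # algorithmic surrogate) lives.
    return "evade" if assessment == "threat" else "continue"

def act(action: str) -> None:
    print(f"executing: {action}")

# One pass through the loop; a vehicle would run this continuously.
act(decide(orient(observe())))
```

Measuring how well each stage performs against a human baseline is one way the AFRL insight could translate into concrete metrics.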
Expect the Air Force to be closely engaged with both the operational and policy issues surrounding autonomy technologies. As with many technologies before it, autonomy puts USAF again at the leading edge of major changes in the art of warfare.
Even so, humans remain in control. As the Air Force’s “Unmanned Aircraft Systems Flight Plan 2009-2047” put it: “Humans will retain the ability to change the level of autonomy as appropriate for the type or phase of mission.”
Researchers have long understood that machines may be more skillful than humans at many tasks. An early guideline on autonomy was proposed by psychology professor Paul M. Fitts in 1951. It addressed the distinction between man and machine head-on. Fitts was studying air traffic control when he developed his list as “a general answer to the problem of dividing responsibility between men and machines.”
Fitts grouped six tasks under the heading “Men Are Better At” and five more as “Machines Are Better At.” Based on the technology of the early 1950s, Fitts gave humans the edge in storing data for long periods of time and in perception of dim light and faint sounds. Both categories would probably be awarded to machines today. However, Fitts also gave humans the advantage in improvisation, inductive reasoning, and judgment—as most would again today.
Military operations with higher levels of autonomy developed quite recently. Autonomous aircraft were flown prior to and during the Vietnam War, but it was the mid-1990s advances in software and the wide availability of precision satellite guidance that made systems such as the MQ-1 Predator reliable enough for routine operations.
The real dilemma is not the current level of autonomous systems. For all their notoriety, the Predator/Reaper family can be seen as just a waypoint on the road to fully autonomous systems. The next applications of autonomy could greatly decrease human crew intervention in the mission timeline.
In summer 2012, the Defense Science Board completed a study of autonomy commissioned by the deputy secretary of defense. The starting point was that autonomy is here to stay. “Unmanned vehicle technologies, even with limited autonomous capabilities, have proven their value to DOD operations,” stated the report, “The Role of Autonomy in DOD Systems.”
The study then raised the issue of finding the appropriate cognitive level for handoffs between human control and software autonomy. The DSB report also acknowledged that “allocations may vary by mission phase as well as echelon.”
Notably, all current DOD unmanned systems are remotely operated; they can default to true automation only briefly and “in extreme circumstances, such as a lost link condition,” as DOD puts it. Making the distinction “is important because our community vernacular often uses the term ‘autonomy’ to incorrectly describe automated operations,” the report chided.
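The lost-link case is, at bottom, a fallback rule: when the operator link drops for long enough, the vehicle reverts to a preprogrammed automated behavior. A minimal sketch of that rule, with an invented timeout and invented mode names:

```python
# Illustrative lost-link fallback; the timeout and behaviors are invented,
# not drawn from any fielded system.
LOST_LINK_TIMEOUT_S = 30.0  # assumed grace period before fallback

def control_mode(last_heartbeat_s: float, now_s: float) -> str:
    """Stay under remote operation while the link is live; otherwise
    default to a preprogrammed automated behavior such as return to base."""
    if now_s - last_heartbeat_s <= LOST_LINK_TIMEOUT_S:
        return "remote"           # operator remains in the loop
    return "automated-return"     # automation only in extremis, per policy

print(control_mode(last_heartbeat_s=0.0, now_s=10.0))  # remote
print(control_mode(last_heartbeat_s=0.0, now_s=45.0))  # automated-return
```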
The debate on autonomy is likely to heat up. The near future holds both technological advances and mission requirements that will keep the spotlight on this development.
Just what does increased performance of autonomous flight technology portend for the Air Force? Autonomy could spread in several ways, and USAF is poised to be at the center of it.
The first application will be greater autonomy for individual vehicles. More than a decade ago, researchers at AFRL led by Bruce T. Clough defined a fully autonomous system this way: “The UAV [unmanned aerial vehicle] receives goals from the humans and translates that into tasks, which it does without human intervention. The UAV has authority to make all decisions.”
Systems are close to employing dynamic tasking, where the vehicle itself can select its next move. The most advanced vehicles, such as Global Hawk, already carry programmed subroutines that can cover significant portions of their missions. General Atomics Aeronautical Systems notes the Predator B can be flown as remotely piloted or “fully autonomous.”
Recognizing this, the Air Force laid out goals for full mission autonomy for air vehicles in the 2009 UAS flight plan. Milestones such as autonomous flight, automatic target engagement, and command of autonomy were anticipated for the 2015 to 2025 time period.
Dynamic tasking would permit automatic selection of flight and mission profiles by the aircraft itself. Crucial steps in the autonomy chain include collision avoidance, detection of other air vehicles, in-flight diagnostics, and mission replanning. While the choice could be monitored, the decision inputs would be carried out onboard the aircraft. Doing more onboard would make it possible to phase out human control through most or all mission segments.
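Dynamic tasking amounts to the aircraft scoring candidate actions onboard and selecting the next one itself. A toy sketch of such a selector, with invented tasks, priorities, and precondition checks:

```python
# Toy onboard task selector; tasks, priorities, and gates are invented.
def select_next_task(tasks, health_ok, airspace_clear):
    """Pick the highest-priority task whose preconditions hold."""
    feasible = [
        t for t in tasks
        if (not t["needs_health"] or health_ok)          # diagnostics gate
        and (not t["needs_airspace"] or airspace_clear)  # collision-avoidance gate
    ]
    return max(feasible, key=lambda t: t["priority"], default=None)

tasks = [
    {"name": "image_target", "priority": 3, "needs_health": True,  "needs_airspace": True},
    {"name": "replan_route", "priority": 2, "needs_health": False, "needs_airspace": False},
    {"name": "hold_orbit",   "priority": 1, "needs_health": False, "needs_airspace": True},
]

# With a simulated sensor fault, the selector falls back to replanning.
print(select_next_task(tasks, health_ok=False, airspace_clear=True)["name"])
```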
Why push for more autonomy? It may be essential to completing missions in a contested environment.
Reapers over Afghanistan operated in a relatively permissive environment under full control of human operators using satellite links. Full autonomy in various types of air vehicles may be needed if satellite links between unmanned aircraft and their remote operator crews are hacked or disrupted. Remote operators can maintain near-constant contact with unmanned systems in a permissive environment. However, rapid, autonomous execution of part of a mission could be invaluable against anti-access systems.
In the case of an unmanned aircraft switching to autonomous mode in denied airspace, independent operation might also permit the aircraft to make onboard decisions about its sensor operations based on weather, mission priorities, etc. The fusion of intelligence and surveillance information has made this a near-term prospect.
Under this concept, speed improves as autonomous systems detect, process, and act on the information. Additional autonomy would be an advantage. A contested, denied access environment could require more autonomy just to complete the kill chain.
Unmanned aircraft may eventually be tasked to acquire targets and release weapons. The Pentagon’s 2012 policy left the door open for autonomous targeting but added restrictions against targeting humans. The guidelines also built in a safeguard by mandating that autonomous systems “complete engagements in a time frame consistent with commander and operator intentions, and if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement.”
Programming in the commander’s intent could extend a long leash to autonomous missions. Under broad interpretation of this concept, human input sets parameters but hands off final task execution decisions to autonomous systems.
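In software terms, that safeguard reads like a timeout rule: the engagement must complete inside a commander-set window, or the system must stop and ask a human. A schematic rendering, with invented names and values:

```python
# Schematic only: the "complete within a commander-set time frame or seek
# operator input" safeguard. The window and outputs are invented.
def engagement_step(elapsed_s: float, window_s: float, complete: bool) -> str:
    if complete:
        return "engagement complete"
    if elapsed_s < window_s:
        return "continue"  # still inside the commander's stated intent
    return "terminate or request operator input"  # window has expired

print(engagement_step(elapsed_s=20.0, window_s=60.0, complete=False))
print(engagement_step(elapsed_s=75.0, window_s=60.0, complete=False))
```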
That may seem a bold step. But growing threats could urge it along.
The US is not the only nation pursuing autonomy. According to the DSB, it is also time for the US to plan explicitly for adversary use of autonomous systems. Likewise, the 2012 directive on autonomy stipulates that systems “function as anticipated in realistic operational environments against adaptive adversaries.”
Forming Up
Another step in autonomy goes beyond what one single aircraft can do. In the near future, autonomous systems could also engage in collaboration. Passive, line-of-sight links have been explored by researchers as a means to control unmanned formations either from a manned “lead” aircraft or from another unmanned vehicle. The goal is for followers to maintain relative range while the leader maneuvers. Software in the loop determines the guidance inputs.
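Station keeping of this kind is classically modeled as a feedback controller: the follower measures its range error to the leader and commands a speed correction. A one-dimensional sketch, with invented gain, gap, and speeds:

```python
# 1-D leader-follower station keeping with a proportional controller.
# Gain, timestep, and speeds are invented for illustration.
KP = 0.5             # proportional gain on range error
DT = 1.0             # control timestep, seconds
DESIRED_GAP = 100.0  # desired separation, meters

leader_pos, follower_pos = 0.0, -150.0  # follower starts 50 m too far back
leader_speed = 50.0                     # m/s, held constant in this toy case

for _ in range(10):
    gap = leader_pos - follower_pos
    error = gap - DESIRED_GAP                   # positive: follower lags
    follower_speed = leader_speed + KP * error  # speed up to close the gap
    leader_pos += leader_speed * DT
    follower_pos += follower_speed * DT

print(f"final gap: {leader_pos - follower_pos:.1f} m")  # converges toward 100 m
```

Real guidance laws add damping, sensor noise handling, and three-dimensional geometry, but the error-correcting structure is the same.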
Success in automated air refueling is a harbinger of more autonomy. The trick in recent tests has been for software to grasp and react correctly to the many minute inputs generated by two vehicles in close flight. The next step is formation flight of two, four, or more air vehicles.
All of this is within reach. The Air Force’s 2009 unmanned aircraft systems flight plan summed up specific projections for progress on technologies such as “see and avoid.”
“The same technologies that keep UAS from any airborne collision will also enable UAS formation flight,” the report said.
Teams of multiple vehicles coordinating their movements without constant intervention from human controllers make for an alluring concept of operations. Research laboratories have already tested autonomous formation flight of small unmanned vehicles, for example.
Of course, a group of autonomous vehicles has to stay in sync—one of the most difficult technical hurdles. The system as a whole will have to verify that the vehicles are receiving a single set of commands and executing them correctly.
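One plausible pattern for that verification, assumed here purely for illustration, is to hash each vehicle's command set and compare digests across the formation:

```python
import hashlib
import json

# Sketch of a command-set consistency check across a formation.
# The hash-and-compare pattern and message formats are assumptions.
def command_digest(commands: list[str]) -> str:
    """Canonical hash of one vehicle's command set."""
    return hashlib.sha256(json.dumps(sorted(commands)).encode()).hexdigest()

def formation_in_sync(digests: list[str]) -> bool:
    return len(set(digests)) == 1  # every vehicle holds identical commands

good = command_digest(["climb 5000", "heading 270"])
bad = command_digest(["climb 5000", "heading 090"])  # one vehicle diverged

print(formation_in_sync([good, good, good]))  # True
print(formation_in_sync([good, good, bad]))   # False
```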
Tactical mastery might come first as a partnership between manned and unmanned systems. The first application for fully autonomous vehicles could be within the manned-unmanned interface often abbreviated as MUM. The interface is already part of plans for next generation systems.
For example, “we’re talking about how manned and unmanned systems might work together” on an Air Force and Navy future air dominance project, said Defense Advanced Research Projects Agency Director Arati Prabhakar in April at the Pentagon.
Similarly, the Obama Administration’s concept for a new long-range strike family of systems includes teaming between a manned or optionally manned bomber and an unmanned strike or electronic warfare platform. As the manned–unmanned interface moves into the mainstream, MUM raises second-order issues. Long segments of flight in collaborative formation with profile changes would practically constitute an autonomous mission fleet.
So far, the autonomy discussion has centered on vehicles. However, operating a platform with no crew on board is not the only mode for autonomy. It also holds possibilities further up the command and control chain—specifically, in autonomous adaptive planning. Sensor and intelligence data processing may need to increase reliance on autonomy routines to perform operations at a faster pace.
The capability for such an application isn’t in doubt. Machines have long since demonstrated their prowess as logic tools. The computer Deep Blue beat champion Garry Kasparov at chess way back in 1997. It would not be far-fetched to assign to a machine the flow of forces, logistics, initial shaping operations, and even decisive operations in the campaign plan. (Computers already handle primary joint logistics processes.) The reason for doing so could be speed of planning, eliminating fatigue, or even just spitting out dozens of campaign plans for possible comparison.
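Computationally, spitting out dozens of plans is generate-and-score: enumerate candidates, evaluate each against a cost model, and rank them. A deliberately toy version, with invented planning axes and costs:

```python
import itertools

# Toy generate-and-score planner; axes, options, and costs are invented.
AXES = {
    "axis_of_advance": ["north", "south"],
    "logistics_mode": ["airlift", "sealift"],
    "shaping_days": [3, 7],
}

def score(plan: dict) -> float:
    """Lower is better; a real model would be vastly richer."""
    cost = 10.0 if plan["logistics_mode"] == "airlift" else 6.0
    cost += plan["shaping_days"] * 0.5
    cost += 2.0 if plan["axis_of_advance"] == "north" else 0.0
    return cost

plans = [dict(zip(AXES, combo)) for combo in itertools.product(*AXES.values())]
best = min(plans, key=score)
print(f"{len(plans)} candidate plans; best: {best} (score {score(best):.1f})")
```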
Dealing with data faster has obvious military advantages. The Air Force has been hinting at this revolution for quite some time.
Former Chief of Staff Gen. John P. Jumper spoke often of the need for a self-forming, self-healing network to maximize command of data. In 2004, he described the value of data as seen in Operation Iraqi Freedom in March 2003. “Now, that networking was crude,” said Jumper. “It was machine-to-machine interfaces, but it was crude.” Airmen did it “on the chat networks at the speed of typing, not the speed of light.”
Part of the answer, of course, is more autonomy. The requirement for autonomy in information stems first from the sheer mass of data, which, not coincidentally, has been generated in large part by the plethora of unmanned systems.
Rapidly making sense of this data requires more automated processing. Referring back to the Fitts criteria, there is little question that the machine can perform data matches more quickly than human analysts. Then there is the unstructured information generated as text, video, social media, and more. The key is to add automated layers of data processing that conform to mission needs and present actionable information as quickly as possible.
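In practice, automated layers usually mean a pipeline, with each stage filtering or enriching the stream before the next sees it. A minimal sketch, with invented stages and sample reports:

```python
# Minimal layered-processing sketch; stage logic and data are invented.
def dedupe(items):
    """Drop exact duplicate reports while preserving order."""
    seen, out = set(), []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

def filter_relevant(items, watch_terms=("convoy", "launcher")):
    """Crude relevance layer: keep only reports mentioning watch terms."""
    return [i for i in items if any(term in i for term in watch_terms)]

def prioritize(items):
    """Push time-sensitive reports to the front of the analyst's queue."""
    return sorted(items, key=lambda i: "launcher" not in i)

raw = ["convoy at grid A", "convoy at grid A", "weather note", "launcher near ridge"]
for stage in (dedupe, filter_relevant, prioritize):
    raw = stage(raw)
print(raw)  # ['launcher near ridge', 'convoy at grid A']
```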
Tempo could be the second source of demand for more autonomy. In the 2000s, faster data processing enabled counterterrorism operations, but those operations unfolded over long periods of time in permissive airspace and uncluttered electronic environments.
To be sure, there are still many technical hurdles to clear as autonomy advances. Certain key enablers must be available in order to realize the full benefits of autonomy, according to DOD. The list includes mission planning that is easy to change; assured precision navigation and timing; better cross-cueing by sensors, both onboard and offboard; and the major issue of how and when to disseminate data from autonomous systems to others engaged in a battle. Efficient use of bandwidth for data transmission is another major concern.
Add in contested environments, false targets, and an information-savvy foe, and the need for autonomous information processing could grow by leaps and bounds.
Rebecca Grant is president of IRIS Independent Research. Her most recent article for Air Force Magazine was “How Many Aircrew?” in the January issue.