An unmanned aircraft launches conventional air-to-air weapons in this conceptual illustration. DARPA is developing an autonomous aircraft with the ability to counter adversarial threats. DARPA

Beyond Pixie Dust

A framework for understanding and developing autonomy in unmanned aircraft.

Nearly every vision, strategy, and flight plan the U.S. Air Force has released over the past decade has identified next-generation unmanned aircraft, autonomy, and artificial intelligence (AI) as technologies critical to securing a decisive combat advantage in future battlespaces. The future battlespace will not be entirely manned or unmanned—it will be a hybrid. USAF warfighters have long envisioned using autonomous unmanned aerial vehicles (UAVs) to perform missions that otherwise require human control, whether from the cockpit or a remote station.

Teaming such autonomous aircraft with manned fighters and bombers is the next step in the development process. The goal of manned-unmanned teaming (MUM-T) is to significantly enhance operational capabilities and capacity by combining the advantages of both manned and autonomous aircraft, including cost, survivability, and judgment. For MUM-T to work in the operational realm, manned and unmanned aircraft must be able to collaborate closely and in ways that are effective and trusted by human warfighters. Pragmatic reliability and dependability are key benchmarks, but the captain on the flight line will be the ultimate arbiter of whether these new solutions add value.

For that to happen, engineers and warfighters need a common understanding of how autonomous technologies map to combat performance. Yet the software algorithms that underpin autonomous behavior and performance are generally not well understood outside technical circles. Although USAF’s warfighters and acquisition professionals intuitively grasp the potential for autonomy and artificial intelligence to transform warfare, most lack in-depth knowledge of what is needed to make these algorithms combat-viable. Instead, autonomy and artificial intelligence are often treated as “pixie dust”—just sprinkle a little on top of a hard problem and the weapon system magically becomes autonomous. It will take more than this cursory understanding to meet tomorrow’s demands.

The Air Force is rapidly evolving new concepts for teaming manned fighters and bombers with autonomous unmanned aircraft to perform strike, counter-air, electronic warfare, and other missions. The goal is to significantly increase operational capabilities and capacity. Artificial intelligence, autonomy, and machine-to-machine learning are fueling an explosion of ideas about how AI can enhance existing capabilities. The challenge is getting these technologies across the chasm between the research and development world and the operational force.

To achieve that hybrid vision, the Air Force must develop a far more robust and shared understanding among engineers and warfighters than exists today. Warfighters lack sufficient comprehension of the kinds of autonomy that are possible and how much automation is appropriate; engineers often do not fully understand warfighters’ operational needs. Significantly, there exists no framework to help bridge these gaps. 

Fostering a better understanding of autonomy in general is necessary to prevent mistrust and miscommunication between strategic planners, operational warfighters, and aerospace engineers. National defense professionals urgently need a common framework that can help participants in these discussions demystify the technology and effectively communicate across their respective disciplines. 

Today’s Unmanned Aircraft

Remotely piloted aircraft like the MQ-1 Predator and MQ-9 Reaper have transformed elements of warfare over the past two decades, but largely in permissive environments where remote-control aircraft face few direct threats and where speed is not a requirement. As new threats emerge, the Air Force must look to the next step in unmanned aviation: autonomous aircraft that can operate effectively without reliance on extended-range datalinks, substantial satellite bandwidth, and intensive effort on the part of remote operators.

Current remotely piloted aircraft technologies are not viable in the highly contested battlespace of the future. A single “24/7” RPA orbit (also called a CAP or line), for example, demands continuous, high-bandwidth connectivity and some 200 people. Long distances impose time delays in the RPA operational cycle: data from the aircraft’s sensors must be transmitted to remote operators, who assess it and determine the appropriate action before control signals can be transmitted back to the RPA. The process is slow and subject to disruption by sophisticated adversaries.


The inherent lag between control inputs and RPA responses can put RPAs out of sync with their environment. Lag can be reduced by replacing distant operators and global satellite links with forward-deployed pilots and low-latency line-of-sight datalinks, but in a spectrum-contested battlespace those control datalinks may be degraded or unavailable, and the forward-deployed pilot presents adversaries with a valuable target. If datalinks are disrupted or denied, the RPA will “go stupid” and automatically revert to lost-link procedures, such as flying a triangular pattern until it nears minimum fuel and returns to base. In 2009, Iraqi insurgents intercepted an unencrypted MQ-1 Predator video feed to monitor and exploit its operations. Encryption was eventually installed to secure RPA links and prevent such intelligence gathering, but long-range, high-bandwidth datalinks remain crucial vulnerabilities in RPA operations.

More recently, the Department of Defense has pioneered manned-unmanned collaboration with impressive results, and it is time to take this partnership to a new level. To team effectively with high-performance fighters and bombers in contested battlespaces, next-generation UAVs must match their tactical speeds and dynamic maneuvering. A new type of UAV is needed for these new operational concepts.

MUM-T Operations

The Air Force faces both a quantity and cost problem. It has fewer aircraft than it needs, and the cost to buy new ones is more than it can afford. Unmanned systems, without the life-support requirements of manned aircraft, can be part of the solution. As Secretary of the Air Force Frank Kendall said, these unmanned platforms will be the key to giving the Air Force “the quantity we need at a reasonable cost.” 

Future autonomous teaming aircraft (ATAs) must be capable of flying, maneuvering, managing sensors, and executing missions, all without a human providing close control inputs.

Broadly conceived, ATAs will be wingmen to manned flight leads who monitor their autonomous operations and direct them only as necessary. In command-and-control terms, this means humans will be tactically “on the loop” for ATA operations, instead of “in the loop,” as they are with today’s RPAs. Autonomous teammates will fly, maneuver, and contribute to the flight’s mission with varying levels of independence while human flight leads or mission commanders will retain positive control in order to verify and consent to any weapons employment. 

The U.S. Air Force, other agencies in the Department of Defense, and the defense industry are all engaged in programs to develop autonomous functionality. The Air Force Research Laboratory’s Skyborg program aims to develop “full-mission autonomy” in a Low-Cost Attritable Aircraft System. Skyborg is not an aircraft, but an open-system architecture of autonomous technologies intended to be broadly compatible with a range of aircraft. Skyborg autonomy took flight in 2021 aboard both a Kratos UTAP-22 Mako drone and a General Atomics MQ-20 Avenger. Both aircraft successfully navigated within required airspace boundaries, responded to navigation commands, demonstrated coordinated maneuvering, and honored flight performance envelopes.

Two-View Framework: How It Applies to a Missile Truck

The Defense Advanced Research Projects Agency’s Air Combat Evolution (ACE) program has also pursued AI capable of maneuvering in relation to a highly dynamic fighter aircraft. ACE’s “Alpha Dogfight” virtual trials tested its AI against a human pilot in basic dogfighting maneuvers; the AI won in all five engagements. Lockheed Martin’s Have Raider MUM-T demonstrator, Northrop Grumman’s autonomous Model 437 aircraft, and Boeing’s Loyal Wingman UAV have likewise demonstrated the potential to deliver new AI-enabled ATAs.

An Autonomy Framework

The UAV framework now used by DOD and the Air Force establishes five categories, or “groups,” for unmanned aircraft. Under this scheme, unmanned aircraft are assigned to groups primarily based on their gross takeoff weights, although their normal operating altitudes and airspeeds are also considered. In the context of autonomous aircraft, this grouping is no longer appropriate: weight, altitude, and airspeed are not the right metrics for categorizing aircraft intended for MUM-T operations.

In its place, the Air Force needs a framework for addressing unmanned aircraft that brings clarity, coherence, and rigor to its pursuit of autonomous capabilities. Such a framework should:

  • Provide a consistent structure for developing autonomy capabilities; 
  • Engender greater fidelity in describing autonomous capabilities for developing concepts of operation; 
  • Enable rational and deliberate prioritization of autonomy-enabling technologies;
  • Clarify the role of humans in autonomous aircraft operations;
  • Establish common reference points for all stakeholder disciplines, from science and technology to acquisition, operations, and policymaking; 
  • Empower policymakers to make informed tradeoffs between capabilities, risks, and costs;
  • Encourage specificity and precision in language to reduce miscommunication and misunderstanding among stakeholders.

The ultimate objective: This autonomy framework for unmanned aircraft should facilitate better communications between warfighters and engineers. 

We propose a two-part framework that addresses unmanned aircraft from the perspectives of the warfighter and the engineer, respectively. The Warfighter View, which we break down into Core, Mission, and Teaming categories, mirrors pilot cognitive tasks and is intended to be intuitive to warfighters as they define requirements for how autonomous systems should perform. “Core” encompasses the flight control and navigation functions necessary for autonomous flight, and breaks down into “Aviate” and “Navigate” responsibilities intended to capture the basic and advanced flight skills learned in pilot training.
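To make the category structure concrete, here is a minimal sketch of the Warfighter View as a Python taxonomy. The class and member names are ours, purely for illustration; the framework itself prescribes no particular implementation.

```python
from enum import Enum

class WarfighterCategory(Enum):
    """Top-level categories of the proposed Warfighter View."""
    CORE = "Core"        # flight control and navigation functions
    MISSION = "Mission"  # sensor management and weapons employment
    TEAMING = "Teaming"  # collaboration with manned and unmanned teammates

class CoreResponsibility(Enum):
    """Core breaks down into the two fundamentals of pilot training."""
    AVIATE = "Aviate"      # stick-and-rudder flight control
    NAVIGATE = "Navigate"  # absolute and relative navigation
```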

Core Aviate refers to all automatic features and functions that enable the aircraft to fly during all phases of flight. The core responsibility of pilots is always to control their aircraft, whether managing an autopilot, using digital flight control technologies, or manually manipulating the controls that move the aircraft’s flight control surfaces. This can be seen as “stick and rudder” operations—making continuous flight control inputs that cause specific aircraft responses within very short feedback loops—spanning the basic and advanced flight skills of takeoff, climb, level-off, turn, descent, acceleration, deceleration, approach, and landing. More tactically, one might think of flying at high angles of attack, setting the lift vector, establishing roll rates, and pulling Gs. Each aircraft has unique attributes associated with its design and unique tradeoffs among speed, altitude, and thrust needed to perform specific actions. These maneuvers must be performed in relation to the physical world, including weather and terrain features, runway locations, and the aircraft’s available fuel, as well as the battlespace environment. Finally, the Aviate subcategory includes preventing and handling flight-related contingencies and emergencies such as wing stalls, engine failures, or battle damage like the loss of one or more control surfaces.

In a manned-unmanned teaming construct, an F-35 pilot would direct, but not operate, unmanned wingmen, which could fly autonomously and augment the F-35 with additional weapons and sensors. Zaur Eylanbekov/Mitchell Institute

Core Navigate tells Aviate where the aircraft should go to accomplish a mission and breaks down into absolute and relative navigation. “Absolute Navigation” covers route planning and determining a course to fixed locations in space, avoiding terrain and no-fly zones, and remaining within permissible airspace boundaries. “Relative Navigation” covers the aircraft’s position and vector relative to weather, other aircraft, and the battlespace. Relative Navigation functions include avoiding bad weather and collisions, conducting aerial refueling, flying in formation, maneuvering to engage dynamic targets, avoiding threats, and taking other offensive or defensive actions.

The “Mission” category includes the functions necessary to accomplish mission-related tasks, such as managing sensor operations or releasing weapons on targets. This is a complex category that spans multiple iterative temporal loops that inform and drive one another. For example, a combat pilot must continuously consider what has already happened in the battlespace and how it constrains or enables current and future options; make decisions and take actions now to execute the mission; and simultaneously think about, prioritize, and plan for future actions, maneuvers, and other mission options—and assess how those desired options affect current decisions and actions. The Mission category interacts with Core and Teaming functions to achieve desired operational outcomes.

“Teaming” covers the functions and features necessary for autonomous UAVs to operate in collaboration with other manned and unmanned aircraft. Teaming encompasses all elements of tactics and mission integration in modern combat operations. As in the framework’s other categories, timing and scale are critical to success: coordinating, integrating, and synchronizing individual actions across time and space with mission partners is essential to achieving desired operational effects. Teaming functions include flying and maneuvering as part of a team, sharing information within aircraft formations and with external entities, and synchronizing the effects multiple teammates create in the battlespace.

Five Levels Of Autonomy 

We propose subdividing each of these three major categories into five levels of autonomy, ranging from Level 1 (minimal automation) to Level 5 (full autonomy).

The automotive industry follows a similar model. The Society of Automotive Engineers’ (SAE) J3016 “Levels of Driving Automation” framework defines six levels of driving automation, from no automation (Level 0) to full automation (Level 5), in which a vehicle requires no input from a human driver.

Mitchell’s proposed framework has three levels of automation and two levels of autonomy in each of the three Warfighter View categories. Automation is an action or set of actions performed according to predetermined rulesets when commanded by a user. Autonomy transforms inputs to outputs according to a more general set of rules, drawing on a deep stack of interconnected decision-making algorithms fed by volumes of data from multiple sources. In order of increasing decision-making capability, Level 1 is Low Automation, Level 2 is Partial Automation, and Level 3 is Full Automation; Level 4 is Semi-autonomous and Level 5 is Fully Autonomous.
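As a rough sketch, the five levels can be captured as an ordered enumeration, with the automation/autonomy boundary falling between Levels 3 and 4. The names below mirror the labels above; the helper function is a hypothetical convenience, not part of the framework.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Five levels applied within each Warfighter View category.
    Levels 1-3 are automation (predetermined rulesets commanded by a
    user); Levels 4-5 are autonomy (more general rules, driven by
    interconnected decision-making algorithms and data)."""
    LOW_AUTOMATION = 1
    PARTIAL_AUTOMATION = 2
    FULL_AUTOMATION = 3
    SEMI_AUTONOMOUS = 4
    AUTONOMOUS = 5

def is_autonomous(level: AutonomyLevel) -> bool:
    # Everything at or above Level 4 decides for itself how to act.
    return level >= AutonomyLevel.SEMI_AUTONOMOUS
```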

Full automation still requires humans in a supervisory role to handle unpredicted stimuli; anyone familiar with advanced flight management systems, autopilots, and auto-throttles will recognize this level of automation. From takeoff and climb through enroute flight, descent, approach, and landing, the aircraft performs its assigned tasks exactly as prescribed by the human.

Level 4 and 5 systems act in unscripted ways. A human may still dictate the tasks the aircraft is to perform, but the direction is now closer to “mission command,” or effects-based tasking, than to specific control. The machine may determine the order and manner of execution, and some tasks may not be performed at all. The machine factors its environment, its internal state (such as how much fuel remains), and even the activities of its teammates into its “decision” process. Thus the behavior of a semi-autonomous or autonomous system is logical, but not always predictable. The difference between Level 4 Semi-autonomous and Level 5 Autonomous lies in the robustness of the unmanned system’s ability to manage unanticipated events during a mission and how much supervision and direction it requires from the flight lead.
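To illustrate what “logical but not always predictable” might look like, the toy tasking function below weighs internal state (fuel), the flight lead’s effects-based task list, and teammate activity before choosing its next action. Every name, threshold, and rule here is invented for illustration; a fielded Level 4 or 5 system would rest on a far deeper stack of algorithms.

```python
def choose_next_task(tasks, fuel_remaining_lbs, teammate_claims,
                     bingo_fuel_lbs=2500):
    """Toy 'mission command' tasking: the flight lead assigns effects-based
    tasks; the machine decides order and manner of execution, and may not
    perform some tasks at all. All values are hypothetical."""
    if fuel_remaining_lbs <= bingo_fuel_lbs:
        return "RETURN_TO_BASE"  # internal state overrides the tasking
    # Skip tasks a teammate has already claimed (perceived teammate activity).
    open_tasks = [t for t in tasks if t["id"] not in teammate_claims]
    if not open_tasks:
        return "HOLD"  # nothing left unclaimed; await further direction
    # Execute the highest-priority open task; the ordering is the machine's call.
    return max(open_tasks, key=lambda t: t["priority"])["id"]
```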

Using levels to define these requirements will help aerospace engineers, technologists, and warfighters describe and understand with greater precision what unmanned aircraft can and cannot do. The criteria and language derived from the Warfighter View will be the basis for conveying to engineers, senior leaders, policymakers, and industry how warfighters intend to use the platform operationally. 

The Engineer View

This same approach can be applied to explain the engineering perspective. We propose an Engineer View that can serve as a kind of “true north” for developing unmanned aircraft systems, ensuring the functions and technologies engineers design align with the warfighters’ vision for how the system will be used.

The following are illustrative examples of functions, technologies, and data relevant to the Core, Mission, and Teaming autonomy categories; they help make these relationships less abstract.

Core Aviate key functions and sub-functions include maintaining aircraft altitude, adjusting pitch angle, executing coordinated turns, deflecting control surfaces, and adjusting engine power. The technologies necessary to implement these functions may include fly-by-wire flight controls, air data sensors, and a digital flight computer. Core Aviate auto-features will require data similar to the information provided by a traditional human-readable flight instrument cluster, such as aircraft altitude, airspeed, and bank angle, along with more detailed data such as the angles and rates for roll, pitch, and yaw.
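A minimal sketch of one such auto-feature appears below: an altitude-hold loop that turns an altitude error into a bounded pitch command, fed by the instrument-cluster data just described. The data fields, gains, and limits are illustrative assumptions, not a real flight control law.

```python
from dataclasses import dataclass

@dataclass
class AirData:
    """Data a Core Aviate feature consumes, mirroring a traditional
    flight instrument cluster plus body-axis angles and rates."""
    altitude_ft: float
    airspeed_kts: float
    bank_angle_deg: float
    pitch_angle_deg: float
    pitch_rate_dps: float  # degrees per second

def altitude_hold_pitch_cmd(state: AirData, target_altitude_ft: float,
                            kp: float = 0.01, kd: float = 0.5) -> float:
    """Proportional-derivative altitude hold: altitude error drives a
    pitch-angle command, damped by pitch rate. Gains are notional."""
    error_ft = target_altitude_ft - state.altitude_ft
    cmd_deg = kp * error_ft - kd * state.pitch_rate_dps
    return max(-10.0, min(10.0, cmd_deg))  # limit command to +/-10 degrees
```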

The MQ-9 Reaper was a game changer over the past two decades, but to operate in tandem with manned aircraft and in contested airspace, future unmanned systems must be able to fly autonomously—that is, without the aid of a remote pilot. Airman 1st Class William Rio Rosado

Core Navigate auto-features will be supported by functions such as flight path planning, waypoint following, and navigation relative to other aircraft. Relevant technologies might include cameras, radars, and path planning algorithms. To support navigation, the aircraft’s systems will need to access data such as the aircraft’s current location, altitude, airspeed, and groundspeed as well as the location of any known obstacles or threats and the boundaries of permissible airspace and no-fly zones. Relative navigation will require data on the distances, closure rates, and vectors from the ATA’s manned and unmanned teammates, other friendly forces, and threats. 
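As one concrete example of an Absolute Navigation building block, the sketch below computes the initial great-circle bearing from the aircraft’s current position to a fixed waypoint; Relative Navigation functions would apply similar geometry to moving teammates and threats. The function name and units are our own illustrative choices.

```python
import math

def bearing_to_waypoint(cur_lat, cur_lon, wpt_lat, wpt_lon):
    """Initial great-circle bearing (degrees true) from the aircraft's
    current position to a fixed waypoint, using standard spherical
    navigation formulas. Inputs are in decimal degrees."""
    lat1, lat2 = math.radians(cur_lat), math.radians(wpt_lat)
    dlon = math.radians(wpt_lon - cur_lon)
    x = math.sin(dlon) * math.cos(lat2)
    y = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360.0  # normalize to 0-360
```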

In the Mission category, relevant functions include sensor operation, positioning the aircraft for optimal sensor performance, target identification, and task sequencing. Technologies might include sensors, hardware and software for processing sensor data, task optimization algorithms, or neural networks trained for automatic target recognition. Data needs for Mission may include aircraft orientation, distance to a target, munitions remaining, and training data to “teach” a system’s algorithms sequencing and decision-making.
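A toy version of task sequencing might look like the sketch below, which greedily orders mission tasks by priority and proximity. Real systems would use task optimization algorithms of far greater sophistication; the data layout here is purely an assumption for illustration.

```python
import math

def sequence_tasks(start_xy, tasks):
    """Greedy task sequencing: repeatedly pick the highest-priority
    remaining task, breaking ties by proximity to the aircraft's current
    position. Each task is a dict with 'id', 'xy', and 'priority'."""
    remaining = list(tasks)
    position, order = start_xy, []
    while remaining:
        nxt = min(remaining,
                  key=lambda t: (-t["priority"], math.dist(position, t["xy"])))
        order.append(nxt["id"])
        position = nxt["xy"]  # plan the next leg from this task's location
        remaining.remove(nxt)
    return order
```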

For Teaming, functional analysis will determine what information should be shared among teammates and other entities to allocate and coordinate their tasks for an operation, such as cooperatively engaging a target. Technologies for sharing this information would include onboard computers for data fusion, algorithms that allocate tasks, and radios to transmit data. Data needs may include the aircraft’s current location, the locations of teammates, raw and processed sensor data, and a database of proven tactics and techniques.
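The sketch below shows a deliberately simple version of such task allocation: each task goes to the nearest unassigned teammate. The names and the distance-only cost model are our assumptions; a fielded allocator would also weigh fuel, weapons loads, and threat exposure.

```python
import math

def allocate_tasks(teammates, tasks):
    """Toy cooperative task allocation. 'teammates' maps callsign to an
    (x, y) position; 'tasks' maps task id to an (x, y) position. Assigns
    each task to the closest teammate that is still unassigned."""
    unassigned = dict(teammates)
    assignment = {}
    for task_id, task_xy in tasks.items():
        if not unassigned:
            break  # more tasks than teammates; leftovers stay unallocated
        best = min(unassigned, key=lambda cs: math.dist(unassigned[cs], task_xy))
        assignment[task_id] = best
        unassigned.pop(best)
    return assignment
```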

In practice, the Engineer View would translate the desired operational capabilities into the hardware and software components needed to create a fielded system.

Recommendations And Conclusion

To gain the fullest benefit of a new framework, early and continuing close collaboration is needed to link warfighters and engineers throughout the life cycle of unmanned systems. We propose four key steps to achieve this objective:

  1. The Air Force needs an autonomy framework to guide its next-generation UAV requirements definition, acquisition programs, and CONOPS and TTP development. Air Force warfighters, aerospace engineers, and acquisition professionals lack a framework today that helps them gain a shared understanding of autonomy and how it can be applied to future MUM-T operational concepts and aircraft.
  2. The proposed split-view framework offers a model to facilitate greater collaboration between warfighters, technologists, and aerospace engineers. Based on the mental tasks and functions of combat pilots, this framework can help warfighters understand autonomous systems in operational terms they are familiar with, and then translate those operational perspectives for technologists and aerospace engineers to guide systems development. This approach facilitates communication and collaboration to accelerate development and fielding of MUM-T UAVs, without constraining either warfighters or engineers. 
  3. The Air Force Deputy Chief of Staff for Strategy, Integration, and Requirements (AF/A5) should have formal ownership of this framework, in collaboration with the Deputy Chief of Staff for Operations (AF/A3), Air Combat Command, and Global Strike Command. With a mix of combat-experienced operators and planning infrastructure, AF/A5 has both the charter and operational expertise to apply the autonomy framework effectively across the range of necessary stakeholders. On the Air Staff, the AF/A3 has deep ties into the operational community, the Air Force Warfare Center, and the warfighting major commands. Together, the AF/A5 and AF/A3 can champion and implement the Two-View Autonomy Framework for Unmanned Aircraft in the service’s requirements definition and the acquisition and development processes.
  4. Stakeholders across the enterprise should use this framework to guide autonomy research, development, and experimentation, as well as to inform the development of new CONOPS and TTPs. Using the framework to its fullest potential will require the A5 and A3 staffs, operators, acquisition professionals, technologists, and industry to maintain tight, collaborative interaction throughout the requirements definition, acquisition, and development life cycle. Employed throughout an aircraft’s life cycle, this framework can encourage greater creativity as warfighters and technologists collaborate to develop innovative autonomous teaming aircraft, CONOPS, TTPs, post-fielding experiments, and continuing modernization upgrades.