Automatic vs. Automated vs. Autonomous: What They Mean for UAS and Unmanned Traffic Management (UTM)

The word “autonomous” is often attached to Unmanned Aerial Systems (UAS, or “drones”), perhaps because machine autonomy conjures up images from the Terminator movies that fire the imagination. In fact, outside of some military systems, only a very small but growing group of applications gives a platform any real level of autonomy; many more fall into the “automated” or “automatic” categories. What do these terms actually mean, and how do they relate to the current state of UAS development overall and, in particular, to the evolution of Unmanned Traffic Management (UTM) in the United States?

In his book Army of None, Paul Scharre defines three levels of machine intelligence:

Automatic—A machine reacts when a threshold is met or a specific event occurs. A drip coffeemaker makes coffee when the button is pushed if there is a filter in the basket, coffee in the filter, and water in the reservoir.
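
For the software-minded reader, the coffeemaker analogy boils down to a few lines of code: one triggering event, a fixed set of preconditions, and a single possible action. The sketch below is purely illustrative; the DripCoffeeMaker class and its checks are hypothetical, not any real product's firmware.

```python
# Hypothetical "automatic" machine: a single trigger and a fixed response.
class DripCoffeeMaker:
    def __init__(self, has_filter: bool, has_coffee: bool, has_water: bool):
        self.has_filter = has_filter
        self.has_coffee = has_coffee
        self.has_water = has_water

    def press_button(self) -> str:
        # Reacts to one event (the button press) and either acts or does nothing;
        # it cannot weigh alternatives or vary its output.
        if self.has_filter and self.has_coffee and self.has_water:
            return "brewing"
        return "idle"

print(DripCoffeeMaker(True, True, True).press_button())   # brewing
print(DripCoffeeMaker(True, True, False).press_button())  # idle
```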

Automated—A system can handle more than one input and takes a specified action when certain pre-specified criteria are met. A modern irrigation controller can start at a pre-programmed time on a pre-programmed day, and often it can take into account the type of sprinkler, type of flora (bushes, grass, etc.), slope, season, and other pre-programmed inputs; it can also respond to a temperature/rainfall gauge to ramp the amount of irrigation water up or down during downpour or drought. It can respond to multiple inputs and alter its output, but in the end it can only act by turning the water on and off.
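
An automated system differs from an automatic one in the breadth of its inputs, not in the freedom of its output. The hypothetical sketch below combines several pre-programmed settings with a rain-gauge reading, yet the only thing it can ever decide is how long to run the water; the parameter names and scaling rules are illustrative assumptions, not taken from any real controller.

```python
# Hypothetical "automated" irrigation controller: many inputs, one kind of output.
def watering_minutes(base_minutes: float,
                     season_factor: float,
                     slope_factor: float,
                     rainfall_last_24h_mm: float) -> float:
    minutes = base_minutes * season_factor * slope_factor
    if rainfall_last_24h_mm > 10.0:      # heavy rain: skip watering entirely
        return 0.0
    if rainfall_last_24h_mm > 2.0:       # light rain: cut the run time in half
        minutes *= 0.5
    # However the inputs combine, the controller can only turn the water on or off
    # for some number of minutes; it never chooses a different kind of action.
    return minutes

print(watering_minutes(base_minutes=20, season_factor=1.2,
                       slope_factor=0.9, rainfall_last_24h_mm=3.0))  # 10.8
```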

Autonomous—Scharre approaches this by stating that an autonomous machine is a goal-driven system. A human sets the objective; the machine then decides among a number of approaches based on other situational data it may have collected and the current status of the objective. Not many good examples exist, and most of them are “supervised” systems, that is, a human approves each action before the system takes it. As stated before, most operational systems in this class are military, such as the Navy’s Aegis defense system, which takes in radar tracking data, ranks a large number of threatening targets by severity, and fires missiles to counter them in order of greatest threat. Under most circumstances a human operator has to give the machine permission to engage, but after that the selection of targets and the assignment of missiles are essentially autonomous. Another possible application that does not exist today, but may soon, would have a drone (UAS) assigned to search a collapsed building otherwise too dangerous for people to enter without structural shoring, as discussed here.
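
What changes at the autonomous level is that the human supplies a goal and the machine chooses its own actions toward it as the situation evolves. The sketch below is a deliberately simplified, hypothetical search loop in the spirit of the collapsed-building example: the operator sets the objective (cover every reachable room), and the drone itself decides which room to enter next based on what it has observed so far. A random choice stands in for a real planner.

```python
# Hypothetical "autonomous" (goal-driven) search: the human sets the objective,
# the machine chooses its own sequence of actions toward it.
import random

def search_building(rooms: list[str], is_blocked) -> list[str]:
    """Visit every reachable room; the route is decided by the machine, not pre-scripted."""
    visited: list[str] = []
    remaining = set(rooms)
    while remaining:
        # Decide among candidate actions using current situational data
        # (here, simply which rooms remain and which are blocked by debris).
        candidates = [r for r in remaining if not is_blocked(r)]
        if not candidates:
            break                              # nothing else is reachable
        choice = random.choice(candidates)     # stand-in for a real planner
        visited.append(choice)
        remaining.remove(choice)
    return visited

rooms = ["lobby", "stairwell", "office_1", "office_2"]
print(search_building(rooms, is_blocked=lambda r: r == "stairwell"))
```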

Sense and avoid capability can also be thought of as a type of autonomy: a UAS detects an incoming object and determines a course of action to avoid a collision. The avoidance maneuver can be complex, since a sudden change in direction or altitude (or both) made without regard to the current situation can put the drone in more trouble than it was in before. Sense and avoid is critical for the search-and-rescue scenario mentioned above, but it is equally critical for any drone that must fly in congested airspace, as called out by the FAA Concept of Operations (ConOps) for Unmanned Traffic Management (UTM). The development plan laid out in the ConOps is organized into successive capability levels, with sense and avoid among the final capabilities needed for an effective UTM system. Thus nearly all commercial UAS in operation will eventually express some level of autonomy through sense and avoid.
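
To make the point about situational awareness concrete, the heavily simplified sketch below scores a handful of candidate avoidance maneuvers, discards any that would push the aircraft outside a safe altitude band, and then selects the one that most increases separation. Every maneuver, number, and limit here is an illustrative assumption; real detect-and-avoid systems are far more involved.

```python
# Hypothetical sense-and-avoid decision: pick the maneuver that most increases
# separation from a detected object without creating a new hazard (here, an
# altitude band the drone must stay inside).
MANEUVERS = {                       # effect of each candidate maneuver (illustrative numbers)
    "climb":   {"d_alt": +20.0, "d_sep": +15.0},
    "descend": {"d_alt": -20.0, "d_sep": +15.0},
    "turn":    {"d_alt":   0.0, "d_sep": +10.0},
    "hold":    {"d_alt":   0.0, "d_sep":  -5.0},
}

def choose_maneuver(altitude_m: float, floor_m: float = 30.0, ceiling_m: float = 120.0) -> str:
    best_name, best_gain = "hold", float("-inf")
    for name, effect in MANEUVERS.items():
        new_altitude = altitude_m + effect["d_alt"]
        if not (floor_m <= new_altitude <= ceiling_m):
            continue                # this escape would trade one hazard for another
        if effect["d_sep"] > best_gain:
            best_name, best_gain = name, effect["d_sep"]
    return best_name

# At 40 m the drone cannot descend 20 m without busting the 30 m floor, so it climbs.
print(choose_maneuver(altitude_m=40.0))   # -> "climb"
```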

Today sense and avoid is still largely in development. Smaller, more capable sensors are coming on the market, and a few software developers are reporting significant progress. In a few short years we will have autonomous drone systems in our skies, and that is something to look forward to.