In what might be the dawn of a new era of aerial combat – and the beginning of the end for another – an artificially intelligent (AI) software program defeated a US Air Force (USAF) F-16 pilot in five simulated dogfights.
The competition on 20 August was the culmination of the Defense Advanced Research Projects Agency's (DARPA) Air Combat Evolution (ACE) programme, an effort to design an AI program that would excel in basic fighter manoeuvres.
The AI program, named “Falcon”, defeated a senior fighter pilot and F-16 Weapons Instructor Course graduate, call sign “Banger”, in a series of nearly flawless performances that exploited the computer program’s ability to fly more precisely and aggressively. The F-16 pilot asked not to be identified, citing “operational security reasons”, says DARPA.
“He’s an experienced operational fighter pilot with more than 2,000 flight hours in the F-16, including combat time,” says the research agency. The Falcon AI program was developed by defence contractor Heron Systems.
Prior to facing off against a human opponent, over the course of 18 and 19 August, Heron Systems’ Falcon dispatched AI programs from seven other teams: Aurora Flight Sciences, EpiSys Science, Georgia Tech Research Institute, Lockheed Martin, Perspecta Labs, PhysicsAI and SoarTech. Simulated F-16s, each controlled by a different AI program, flew against each other in a variety of aerial combat scenarios.
Ultimately, the Falcon AI program made short work of its human opponent, often getting its gun sight on the adversary within the first few turns of each engagement. Hits were scored when a 914m (3,000ft) laser beam, representing gun range and protruding from the nose of a virtual jet, crossed over another jet’s fuselage.
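The hit rule described above amounts to a simple geometric test: does a 914m ray from the attacker's nose pass close enough to the other jet? A minimal Python sketch of that idea (the fuselage hit radius is an assumption for illustration; the trials' actual scoring logic is not public):

```python
import numpy as np

GUN_RANGE_M = 914.0       # 3,000ft "laser beam" gun range from the trials
FUSELAGE_RADIUS_M = 5.0   # assumed hit tolerance, not specified by DARPA

def gun_hit(attacker_pos, attacker_heading, target_pos):
    """Return True if a ray of GUN_RANGE_M along the attacker's nose
    passes within FUSELAGE_RADIUS_M of the target's position."""
    d = np.asarray(attacker_heading, dtype=float)
    d /= np.linalg.norm(d)                       # unit vector along the nose
    rel = np.asarray(target_pos, float) - np.asarray(attacker_pos, float)
    t = np.dot(rel, d)                           # distance along the ray to closest approach
    if t < 0 or t > GUN_RANGE_M:                 # target behind the nose or beyond gun range
        return False
    miss = np.linalg.norm(rel - t * d)           # perpendicular miss distance
    return miss <= FUSELAGE_RADIUS_M
```

In this toy version a shot "connects" only while the defender sits inside a thin cylinder extending 914m from the attacker's nose, which is why getting the gun sight on quickly, as Falcon did, decides the fight.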
The AI program’s precise flying allowed it to disregard the flight safety rules on which human USAF pilots are trained. That ultimately gave it an advantage in simulated combat, says DARPA programme manager USAF Colonel Dan “Animal” Javorsek.
“We do not allow pilots to pass within 500ft of each other. That bubble and a restriction to take no greater than 135-degree gunshots, they were violating routinely,” he says. “The agents were capitalizing on precisely those limitations, which is in all honesty exactly what we want. The point of this exploration of what AI can do is that it can tell us and help us explore the tactics space that we just don’t accept from a risk perspective.”
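The two training limits Javorsek mentions, a 500ft separation bubble and a 135-degree cap on gunshot aspect, can be expressed as a simple check that a human pilot must pass and the AI agents routinely ignored. A hypothetical sketch (the 2D aspect-angle convention and function names are illustrative, not the trials' actual rules engine):

```python
import math

MIN_SEPARATION_FT = 500.0   # the training "bubble" Javorsek describes
MAX_ASPECT_DEG = 135.0      # gunshot-aspect restriction on human pilots

def aspect_angle_deg(target_heading, target_to_attacker):
    """Angle between the target's tail and the line of sight to the attacker
    (0 deg = dead astern, 180 deg = head-on), in 2D for simplicity."""
    hx, hy = target_heading
    vx, vy = target_to_attacker
    dot = (-hx) * vx + (-hy) * vy                       # tail direction is -heading
    mag = math.hypot(hx, hy) * math.hypot(vx, vy)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))

def shot_within_training_rules(separation_ft, target_heading, target_to_attacker):
    """True only if a gunshot respects both limits human pilots train under."""
    return (separation_ft >= MIN_SEPARATION_FT and
            aspect_angle_deg(target_heading, target_to_attacker) <= MAX_ASPECT_DEG)
```

A shot from dead astern at safe range passes this check; the near-head-on, close-in shots the AI agents favoured fail it, which is exactly the tactics space Javorsek says human risk rules close off.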
The AI program not only flew more precisely, but reacted quicker, says F-16 pilot Banger.
“I may not be comfortable putting my aircraft in a position where I might run into something else – or take that high-aspect gunshot is a better way to say that. The AI would exploit that,” he says. “It is able to have a very fine precision control, with perfect-state information between the two aircraft. It’s able to make adjustments on a nanosecond level.”
For a human to go through the same tactical OODA loop (observe, orient, decide, act), it takes more time, says Banger.
“I had to observe that transition, reorient my thoughts and my game plan, make a decision, which translates into moving motor controls into that final pack and then applying the stick pressure or the throttle change that is required to execute in relationship to that orientation that I observed in the beginning,” he says. “And the loop just continues to repeat.”
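The cycle Banger describes can be sketched as four stage functions run in sequence, with the loop rate determining how quickly an agent reacts; the AI simply iterates the same cycle far faster than a human can. A purely illustrative Python sketch (all names are hypothetical):

```python
def ooda_loop(observe, orient, decide, act, state, steps):
    """Repeat the observe-orient-decide-act cycle Banger describes.
    The four stage functions are placeholders: any agent, human or AI,
    runs the same loop; the faster it completes, the sooner it reacts."""
    for _ in range(steps):
        observation = observe(state)    # observe: sense the other aircraft
        picture = orient(observation)   # orient: update the mental model
        command = decide(picture)       # decide: pick a manoeuvre
        state = act(command, state)     # act: apply stick or throttle inputs
    return state
```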
In other respects, however, the AI-controlled and human-controlled simulated F-16s were evenly matched. For instance, the simulation did not allow the AI program to fly its F-16 with any greater g-force than its human opponent could.
That aircraft are limited by the amount of g-force a pilot can handle is a misconception, says Javorsek.
“The airplanes are not limited by the pilot. I don’t know of anyone who’s flown a combat mission with a 9g F-16,” he says. “The airplane is limited by the weapons and stores that it carries, the tanks that it carries.”
If anything, the biggest limitation on manned flight is the life-support equipment, says Javorsek.
Building on what was demonstrated in the ACE programme, DARPA next plans to find ways for human and AI pilots to collaborate in combat: so-called manned-unmanned teaming.
“The trick is not so much your ability to do this one-v-one manoeuvring, but rather how do I convey what I’m seeing out my window and how I perceive contextually the environment? How does that mesh with what my wingman sees?” says Javorsek, noting the difficulty of passing information between human pilots and AI programs. “How do we de-conflict from each other so that we prosecute an effective engagement against this adversary?”