The US Air Force (USAF) is denying reports that an unmanned aerial system (UAS) powered by artificial intelligence turned against its operators during an exercise.
Recent comments by a USAF test pilot at a Royal Aeronautical Society (RAeS) summit in London appeared to indicate such an event had occurred during a training simulation.
While speaking at the 23 May RAeS event, Colonel Tucker ‘Cinco’ Hamilton, the USAF chief of AI test and operations, appeared to state that an AI-powered combat UAS had attacked its operators during a simulation, when orders from human overseers went against its mission objective.
According to the RAeS transcript of his remarks, Hamilton said the incident occurred during a suppression of enemy air defences exercise, in which the UAS was tasked with destroying surface-to-air missile sites on the ground.
“At times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” Hamilton recalled. “So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he added.
Hamilton, who is a rated Boeing F-15 and Lockheed Martin F-35 pilot according to his LinkedIn profile, went on to say the simulation team subsequently trained the AI not to attack its operator.
“So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone, to stop it from killing the target,” he said.
The Pentagon now says the experienced test pilot misspoke.
“It appears the colonel’s comments were taken out of context and were meant to be anecdotal,” air force representative Ann Stefanek says on 2 June.
“The department of the air force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” she adds. “This was a hypothetical thought experiment, not a simulation.”
The RAeS also confirms Hamilton’s original comments were misunderstood. The group says the USAF officer “admits he ‘mis-spoke’ in his presentation”.
“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Hamilton told the RAeS magazine Aerospace on 2 June. He echoes the Pentagon statement, describing his prior comments as a hypothetical example of the challenge posed by AI autonomy.
He adds the USAF has never tested a weaponised artificial intelligence, either in a simulation or the real world.
Although the USAF denies the incident occurred, it is certainly a scenario the service will have to address in the future. The air force’s Next-Generation Air Dominance (NGAD) fighter development programme is seeking to deliver a sixth-generation combat aircraft, one expected to operate extensively with other autonomous jets.
Little is known about these so-called “collaborative combat aircraft” (CCA), but the USAF is already actively testing at least one potential platform for that role: the Kratos XQ-58 Valkyrie.
The service is also believed to be testing the Boeing MQ-28 Ghost Bat, which recently made its first public appearance in the USA. The type is being jointly developed by Boeing and the Royal Australian Air Force in Australia, where flight tests are ongoing.
While some CCAs may fill non-lethal roles such as electronic warfare or in-flight refuelling, the pilotless jets are expected to eventually take on a combat role as well.
The US Navy has set the long-term goal of having at least 60% of the aircraft in its carrier air wings be uncrewed. Service officials did not set a target date for that milestone when they revealed the goal in April.