The US Department of Defense (DoD) has outlined five principles to help keep artificial intelligence (AI) technology from running amok.

The principles, which the DoD released on 25 February, provide an ethical framework intended to guide the US defence industry’s development of AI and the US military’s use of such emerging technologies.

The DoD believes it must pursue AI or risk being leapfrogged by adversaries such as China and Russia, which could use the technology to prevail on future battlefields by, for example, using software to observe and react to US forces faster. But the US military fears those countries are developing AI without proper safeguards.

“The stakes for AI adoption are high. AI is a powerful emerging and enabling technology that is rapidly transforming culture, society and eventually even warfighting,” US Air Force (USAF) Lieutenant General John Shanahan, director of the DoD’s Joint Artificial Intelligence Center, says during a briefing. “Whether it does so in a positive or negative way depends on our approach to adoption and use. The complexity and the speed of warfare will change as we build an AI-ready force of the future.”

The DoD’s five principles are:

1. Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment and use of AI capabilities.
2. Equitable. The department will take deliberate steps to minimise unintended bias in AI capabilities.
3. Traceable. The department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
4. Reliable. The department’s AI capabilities will have explicit, well-defined uses, and the safety, security and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
5. Governable. The department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

The Pentagon acknowledges its principles are broad and need further refinement. That is by design, as much of how AI will be used remains unknown. The DoD aims to continuously refine the guidelines.

Specifically, the US government wants to further develop procurement guidance, technological safeguards, organisational controls, risk mitigation strategies and training measures for the use of AI.

Two notable USAF AI projects are the Skyborg programme, which seeks to develop software to autonomously control attritable unmanned air vehicles (UAVs), and Project Maven, a computer vision effort to help commanders quickly identify enemy combatants amid the torrent of surveillance video gathered by UAVs such as the General Atomics Aeronautical Systems MQ-9 Reaper.
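
Neither programme’s software is public, but the core technique behind Maven, running an object detector over each frame of incoming video, can be sketched with off-the-shelf tools. The sketch below pairs OpenCV with a stock torchvision detector; the file name, confidence threshold and model choice are illustrative assumptions, not anything Maven actually uses.

```python
import cv2  # OpenCV, for decoding video frames
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

# Stock COCO-trained detector standing in for Maven's (non-public) models
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]

cap = cv2.VideoCapture("uav_footage.mp4")  # hypothetical surveillance clip
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV yields BGR uint8; the detector expects RGB floats in [0, 1]
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        detections = model([tensor])[0]
    # Surface only confident detections for a human analyst to review
    for label, score in zip(detections["labels"], detections["scores"]):
        if score > 0.8:
            print(f"frame {frame_idx}: {categories[label]} ({score:.2f})")
    frame_idx += 1
cap.release()
```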

MQ-9 Reaper flies a training mission over the Nevada Test and Training Range

Source: US Air Force

AI ARMS RACE

China has also made AI its top priority.

“Their intent is to move fast and aggressively, with high levels of investment and extraordinary numbers of people to advance AI,” says Shanahan.

He says Beijing has not yet surpassed the USA, but wants to be ahead by 2030.

The USA currently leads thanks to its technology industry, academic institutions and culture of innovation, among other strengths, says Shanahan.

“The United States has very deep structural advantages,” he says. “Those structural advantages won’t be in place forever.”

The DoD is concerned that China and Russia will rush shoddy AI into military service without enough thought to unintended consequences or harm to bystanders.

“What I worry about with both countries is they are moving so fast that they are not adhering to what we would call mandatory principles of AI adoption,” says Shanahan. “We will not field an algorithm until we feel it meets our performance standards.”

AUTOMATED WAR

Whereas soldiers and commanders were once held responsible for decisions on the battlefield, AI’s ability to act independently could shift accountability to those who design and buy the technology: software developers and government procurement officials.

“The real hard part for this is taking the AI delivery pipeline and understanding where those ethics principles need to be applied,” says Shanahan. For instance, he says the military and developers must consider the source and quality of data used to develop AI. “Is the data representative of a very small sample size, as opposed to a very diverse set of data that would be necessary to develop a high-performing algorithm?” he says. “[It goes] all the way to things like test and evaluation.”
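
As a toy illustration of that sample-diversity point (the data here is entirely synthetic and reflects nothing about DoD datasets), the sketch below trains the same scikit-learn classifier on a narrow sample and on a diverse one, then scores both against a broad held-out set:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, x1_spread):
    # The true rule depends on both features; x1_spread controls how much
    # of the second feature's range a given sample actually covers.
    x0 = rng.normal(0.0, 1.0, n)
    x1 = rng.normal(0.0, x1_spread, n)
    y = (x0 + 0.5 * x1 > 0).astype(int)
    return np.column_stack([x0, x1]), y

# A broad held-out set standing in for real-world operating conditions
X_test, y_test = make_data(5000, x1_spread=3.0)

for name, x1_spread in [("narrow sample", 0.05), ("diverse sample", 3.0)]:
    X_train, y_train = make_data(500, x1_spread)
    clf = LogisticRegression().fit(X_train, y_train)
    print(f"{name}: held-out accuracy = {clf.score(X_test, y_test):.2f}")
```

The narrow model almost never sees the second feature vary, so it cannot learn that feature’s weight and degrades sharply under the broader conditions of the held-out set.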

The USA maintains its AI technologies must have a “kill switch” so that supervising operators can stop errant software from doing unintended harm. The software may even need another set of software to supervise it.

“Some of those principles will need unique technology solutions to help bring those to life,” says Shanahan.
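
Shanahan does not describe what those solutions would look like. One common engineering pattern behind a “kill switch”, though, is an independent watchdog that monitors a system’s outputs and trips a disengage signal when they leave a defined safe envelope. The minimal single-process sketch below illustrates the pattern in Python; the telemetry stream, threshold and timing are all hypothetical:

```python
import random
import threading
import time

telemetry = []                    # most recent actuator commands
kill_switch = threading.Event()   # the "disengage" signal

def autonomous_loop():
    # Stand-in for an AI control loop emitting one command per tick
    while not kill_switch.is_set():
        telemetry.append(random.gauss(0.0, 1.0))  # hypothetical command
        time.sleep(0.01)

def supervisor(limit=2.5):
    # Independent watchdog: trips the kill switch when a command
    # leaves the defined safe envelope
    while not kill_switch.is_set():
        if telemetry and abs(telemetry[-1]) > limit:
            print("unsafe command detected; disengaging")
            kill_switch.set()
        time.sleep(0.01)

threads = [threading.Thread(target=autonomous_loop),
           threading.Thread(target=supervisor)]
for t in threads:
    t.start()
for t in threads:
    t.join(timeout=5.0)
kill_switch.set()  # ensure shutdown even if the watchdog never fired
```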