Are they lethal autonomous weapons systems, with the tidy acronym “LAWS” – or killer robots? Either way, politicians, soldiers, society and the aerospace industry that serves them must grapple with the question: how far should we go in marrying artificial intelligence (AI) and unmanned air systems – or, to use their more emotive name, drones?

The fact is, LAWS already exist. As noted in our feature on loitering munitions, Israel Aerospace Industries’ lethal Harop can be set to autonomously detect and destroy anti-aircraft batteries, homing in on their radar emissions. Air-defence systems, such as the USA’s Raytheon-produced Patriot, also have an autonomous capability.

Harop (image: Israel Aerospace Industries)

In a defensive role, such autonomous systems – guided by carefully written rules of engagement – may be welcome.

But AI and electronics are so advanced that deploying killer autonomous systems in less clear-cut circumstances – say, to enforce a curfew – may be only a matter of choice. There are already legitimate worries about drone killings, commanded by distant “operators” attempting to follow rules of engagement while observing a scene indirectly through sensors. Kill-or-no-kill decisions are being made without the benefit of the sensory immersion available to a soldier or pilot. The notion that algorithms and AI can be perfected to destroy only legitimate targets is delusional.

For sure, human control brings with it the risk of collateral damage. But if robots run amok, “justice” may be just a shrug of the shoulders and the adjustment of a few lines of software.

It is, in short, time to decide how far drones will take us. Otherwise, in the military sphere, the momentum of expedience and operational capability may take us there automatically.

Source: Flight International