Technology

The problem with fully autonomous weapons

The recent launch of the Red Dragon suicide drone by US manufacturer AeroVironment caused a stir among nations that subscribe to the law of armed conflict.

The advanced platform (pictured below) can strike a target up to 200km away with a 10kg explosive payload. But what caught most people’s attention was its advanced AI brain: if the operator so desires, the drone can be configured to fly a fully autonomous mission, selecting and eliminating an enemy target on its own with no human operator in the decision-making loop.

The capability could, theoretically, boost any army’s fighting power, providing a potent offensive weapon while freeing up personnel for other roles.   

But it is difficult to foresee whether the fully autonomous option would ever be used by UK armed forces. At present such action is strictly off limits and looks set to remain so, despite the fact that certain foreign states and terrorist groups have already deployed this technology.

“There are major ethical and legal obstacles standing in the way of the UK doing the same,” says Lt Col Tristan Davies (AGC (ALS)), responsible for operational law for capability development at Army Headquarters.   

“Anything the UK fields has to strictly comply with international law.”   

The UK’s current position is that there must be “context-appropriate human involvement” in the use of AI and autonomy in weapon systems.

“We have some systems with degrees of automation, such as the navy’s Phalanx automated close-in weapon system (shown right), but a human operator will always be in the chain somewhere, even if just to set the parameters for targeting or to switch objectives, or to shut them down,” explains the officer.  

Currently, international law effectively prohibits the use of fully autonomous systems because they simply cannot meet the legal requirements for deployment.

“The rules mean any technology we use must be able to distinguish between military targets and civilians,” Lt Col Davies adds.   

“You can’t cause unnecessary suffering, and any action must also be proportionate – so a fully autonomous drone would need to be clever enough to weigh the potential collateral damage against the need for the attack itself.

“Also, the AI must have the ability to cancel, suspend or amend the operation.   

“Imagine if an enemy soldier is waving his hands about, trying to surrender. Can the drone recognise that and not attack?

“And can it distinguish between a badly hurt individual and one lying in wait in a foxhole?”

Killing personnel who are out of the fight through injury or surrender is against the laws of war, and the UK takes its obligations in this area very seriously.

“Under Additional Protocol I to the Geneva Conventions, any new technology goes through an Article 36 review, which determines whether the weapon is lawful under existing international law,” continues Lt Col Davies.

“In the UK we have Defence Futures, a tri-service body that undertakes legal reviews of new weapons as they are being developed.  

“Also, commanders are accountable under the law for anything used now or in the future.  

“So while fully autonomous systems may be here, it’s hard to see right now how their deployment would ever be permissible for the UK’s armed forces.”  

Whether this revolutionary tech will ever be clever enough to meet the high bar set by international law remains to be seen. 

However, it is not going away. According to Lt Col Davies, the huge technical strides being made in AI mean law-abiding nations across the globe, including the UK, are spending more time than ever ensuring they can deliver assured systems and that any updates brought in do not break the rules.