When the Only Option is 'Not to Play'? Autonomous Weapons Systems Debated in Geneva

Published August 21st, 2019 - 09:37 GMT
Rami Khoury / Al Bawaba

United Nations Secretary General António Guterres made headlines last November by calling on states to negotiate a ban on lethal autonomous weapon systems, calling them ‘morally repugnant and politically unacceptable’.

Heeding Mr Guterres’ call, representatives from 70 countries are meeting in Geneva this week to discuss concerns raised by lethal autonomous weapons, commonly known as ‘Killer Robots’. Attending alongside them are representatives from NGOs and industry, as well as advocates from the Human Rights Watch-coordinated Campaign to Stop Killer Robots. This week’s meeting marks the 8th round of talks between members of the Convention on Conventional Weapons.


Whilst 28 countries and thousands of scientists and artificial intelligence experts have explicitly supported a ban on fully autonomous weapons, some of the world’s leading military powers continue to dig in their heels. Conventional Weapons talks, which have been ongoing since 2014, have still not produced anything more than non-binding principles. Russia, the USA, Australia, Israel and the UK opposed calls to negotiate a new treaty on killer robots in March, a stance that early reports suggest is likely to carry over into this week’s meeting.

In a recent report for the Center for a New American Security, a Washington-based defence think tank, autonomous weapons expert Paul Scharre drew an important distinction between existing semi-autonomous and automatic weapons systems and the hypothetical fully autonomous systems of the future.

Whilst currently existing automatic systems such as land mines and missile defence systems respond to easily identifiable triggers and are usually subject to overriding human control, the potential systems of the future would exhibit some degree of ‘learning, adaptation or evolutionary behaviour’, allowing them to make decisions freely in ‘open and unstructured environments’.
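
To make that distinction concrete, the toy Python sketch below (entirely hypothetical, with invented names, thresholds and data) contrasts a fixed-trigger rule of the kind found in automatic systems with a system that adjusts its own behaviour from feedback, so that its future decisions are not fully specified in advance.

```python
# Illustrative sketch only: contrasting a fixed-trigger 'automatic' rule
# with a crude 'adaptive' one. All names, thresholds and data are invented.
import random


def fixed_trigger(sensor_reading: float, threshold: float = 0.9) -> bool:
    """Automatic system: fires if and only if a pre-set trigger is met."""
    return sensor_reading > threshold


class AdaptiveSystem:
    """Toy 'learning' system: it shifts its own firing threshold in
    response to feedback, so its future behaviour is not fixed in advance."""

    def __init__(self, threshold: float = 0.9, learning_rate: float = 0.05):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def decide(self, sensor_reading: float) -> bool:
        return sensor_reading > self.threshold

    def update(self, was_correct: bool) -> None:
        # Nudge the threshold down after good outcomes and up after bad
        # ones, so tomorrow's decisions can differ from today's.
        if was_correct:
            self.threshold -= self.learning_rate
        else:
            self.threshold += self.learning_rate


if __name__ == "__main__":
    random.seed(0)
    adaptive = AdaptiveSystem()
    for _ in range(5):
        reading = random.random()
        print(fixed_trigger(reading), adaptive.decide(reading),
              round(adaptive.threshold, 2))
        adaptive.update(was_correct=random.random() > 0.5)
```

The point is not the code itself but what it illustrates: the second system’s behaviour depends on what it has experienced, not only on what its designers wrote down.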

Whilst the development of autonomous weapons systems remains in its infancy, innovations in artificial intelligence raise the possibility of their deployment in the not-too-distant future.

In 2018, Chinese scientists revealed to the South China Morning Post that the People’s Liberation Army Navy was developing autonomous submarines suitable for reconnaissance, mine placement and ‘suicide attacks’ against enemy vessels, slated for deployment in the early 2020s. Newer military drones such as the General Atomics MQ-9 Reaper can take off, land and fly to designated points without human intervention.

A recent report by PAX, a Dutch NGO investigating autonomous weapons, argues that innovations in commercial artificial intelligence in the fields of facial recognition, ground robotics and system integration may have implications for the defence sector. Reviewing the activities of 50 technology companies operating in 12 countries, the report found that 21 of them should be treated with ‘high concern’ given their products’ implications for the development of lethal autonomous weapons.

Representatives of the Campaign to Stop Killer Robots argue lethal autonomous weapons would be ‘unable to apply compassion or nuanced legal and ethical judgements to decisions to use lethal force’. Ethical concerns have long been at the centre of discussions on autonomous weapons systems, but a number of pertinent principles of international human rights law have also been invoked to argue for their prohibition.

Allowing ‘Killer Robots’ to act without human oversight could potentially violate legal principles concerning the rights to life and remedy. Importantly, experts often cite the Martens Clause in discussions of the legal implications of autonomous weapons systems.


Reassuring or spooky? The world's first operational police robot stands to attention near the Burj Khalifa in downtown Dubai. (Giuseppe Cacace/AFP)

The Martens Clause, a long-standing provision of international law, states that in the absence of a specific treaty, established custom, the principles of humanity and the dictates of public conscience continue to provide protection for civilians and combatants. In the case of proposed lethal autonomous weapons systems, the clause provides factors that states must consider when examining new challenges raised by emerging technologies.

According to Mr Scharre, many of the key concerns related to lethal autonomous weapons systems concern operational risk. His report argues that ‘autonomous weapons systems have a qualitatively greater degree of risk than equivalent semi-autonomous weapons that retain a human in the loop’. Because autonomous systems lack the flexibility of humans to adapt to novel circumstances, they may, in unexpected situations, make mistakes a human would not have made. Two operational fears stand out.

In the first instance, adversaries in combat operations may deliberately try to manipulate weapons systems. Adversarial hacking, whether through ‘spoofing’ (sending false data), behavioural hacking (exploiting predictable behaviours to ‘trick’ systems into performing a certain way) or the direct takeover of systems, could lead to weapons malfunctioning.
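
As a purely hypothetical illustration of spoofing (nothing here reflects any real weapons system), the short Python sketch below shows how fabricated sensor readings can push a simple automated decision rule past its trigger point without the attacker ever touching the underlying code.

```python
# Illustrative sketch only: 'spoofing' a toy decision rule by injecting
# fabricated sensor readings. The rule, numbers and threshold are invented.
from statistics import mean


def classify_as_threat(readings: list[float], threshold: float = 0.5) -> bool:
    """Toy rule: flag a target when average sensor confidence exceeds a
    fixed threshold."""
    return mean(readings) > threshold


genuine = [0.20, 0.10, 0.30]            # benign object, low confidence
print(classify_as_threat(genuine))      # False: no engagement

# An adversary who can inject data does not need to modify the software;
# a handful of fabricated high-confidence readings flips the decision.
spoofed = genuine + [0.99, 0.99, 0.99]
print(classify_as_threat(spoofed))      # True: the rule now 'sees' a threat
```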

Revelations in 2015 that computer hackers were able to remotely take control of driverless ‘autonomous vehicles’ and operate or disable vital driving functions such as steering and braking should serve as a cautionary tale for proponents of ‘autonomous’ systems.

Secondly, unanticipated interactions with complex environments can cause fatal errors. Mr Scharre cites the first Pacific deployment of Lockheed Martin F-22 fighter jets as indicative of this phenomenon.

Whilst crossing the International Date Line, the F-22 jets reportedly experienced technical difficulties that shut down all onboard computer systems and nearly caused the catastrophic loss of the aircraft. As systems and their environments become more complex and uncertain, the likelihood of fatal errors increases.
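
The pattern is easy to reproduce in miniature. The sketch below is not the actual F-22 fault, only a hypothetical navigation routine whose author never anticipated a flight crossing the 180th meridian; the unconsidered input makes the routine abort rather than answer.

```python
# Illustrative sketch only: a toy navigation routine with an untested edge
# case at the 180th meridian. Not the real F-22 fault, just an example of
# an unanticipated environmental input crashing otherwise 'working' code.

def heading_offset(current_lon: float, target_lon: float) -> float:
    """Naive eastward offset that assumes longitudes never wrap around."""
    delta = target_lon - current_lon
    if not -180.0 <= delta <= 180.0:
        # The author never expected a jump this large, so the code treats
        # it as corrupt data and aborts instead of wrapping around.
        raise ValueError(f"impossible longitude delta: {delta}")
    return delta


print(heading_offset(10.0, 25.0))       # ordinary flight: fine
print(heading_offset(179.9, -179.9))    # crossing the date line: crashes
```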

In the case of lethal autonomous weapons, failure may lead to autonomous weapons continuing to engage inappropriate targets until their magazines are exhausted. Interactions between malfunctioning weapons systems could lead to catastrophic consequences over wide areas. Enabling autonomous attacks could also lead to a rapid escalation of conflict.

As PAX note, ‘delegating decisions to algorithms could result in the pace of combat exceeding human response time, creating the danger of rapid conflict escalation’. Taking military decisions out of the command of human beings may therefore prove strategically disastrous. 

Though security analysts rightly caution against adopting an uncritical assumption that new developments in military technology augur the coming of a ‘Robopocalypse’, the high stakes at play in debates over lethal autonomous weapons systems suggest the need for international regulation. Recent recommendations by ASRC Federal, an intelligence and defence consultancy, may yet prove apt: ‘like chemical and biological weapons, for weaponised AI, the only winning move is not to play’.


