Killer robots: should we be afraid?

Is the use of autonomous weapons inevitable?

Richard Robert, Journalist and Author
November 9th, 2021
Key takeaways
  • NGOs and some governments have sought a ban on killer robots, notably through the Stop Killer Robots campaign.
  • Arguments for a ban include dehumanisation, algorithmic bias, and machines' inability to understand the consequences of their actions.
  • However, the rapid spread of these weapons systems has sidelined these discussions, fuelling a robotic arms race.
  • Nevertheless, the idea that some form of human control is required remains central to current discussions.

The rise of autonomous lethal weapons in the early 2000s has provoked political reactions. Since 2013, states and non-governmental organisations have been engaged in discussions within the framework of the Convention on Certain Conventional Weapons, a United Nations body based in Geneva. A campaign to Stop Killer Robots was also launched by a group of NGOs, including Human Rights Watch International. But now in 2021, it has become clear that these efforts have not been particularly successful. The new arms race that has already begun makes an outright ban very unlikely; the question has therefore shifted to international regulation.

The arguments for a ban

The first argument put forward by the Stop Killer Robots campaign is dehumanisation: machines do not see us as people, but as lines of code. The second argument is algorithmic bias: facial recognition used in some automatic weapons systems favours pale faces with strong features, reproducing institutional discrimination against women and people of colour. The third argument is the difference between a human decision and a computer decision: machines do not understand the complexity of a context, and the consequences of their actions can undermine the legal and social order.

Machines do not understand context or the consequences of their actions… humans must remain in control.

Therefore, humans must remain in control. Other considerations support this, such as the issue of legal liability, or the lowering of the threshold for triggering a conflict. A drone war is a bit like a video game, potentially absolving warring parties of responsibility. The last argument is the arms race. This race has begun, and it is precisely this race that explains the failure of the campaign to ban autonomous weapons systems.

The failure of state-to-state talks

Alongside the campaign led by NGO activists, several states have pushed for severe limitation, too – around 30 have come out in favour of a complete ban. The UN Secretary General has spoken out repeatedly on the subject, in strong terms: "machines that have the power and discretion to kill without human intervention are politically unacceptable, morally repugnant and should be banned under international law". But the rapid spread of these weapons systems (produced by a growing number of countries, some of which trade in them) has sidelined these discussions.

One reason for this failure is that countries advocating an outright ban have little weight in the international arena, while the main producing and using countries are heavyweights: the US, China and Russia are permanent members of the Security Council. The prospect of a treaty was formally rejected in 2019, with the US and Russia the staunchest objectors. China, while less vocal, is on the same side. The UK and France, the other two permanent members of the Council, have long leaned towards a ban but have nonetheless taken the industrial route of manufacturing these weapons systems.

As we have seen in the nuclear field, the political culture favoured by these powers is to limit access to these advancements to an exclusive 'club' of countries rather than to allow their own progress to be hindered. Yet the technologies involved in autonomous lethal weapons are largely developed in the civilian world and will become increasingly widespread. In these conditions, the major states are counting on their technological lead to avoid being caught off guard.

The attempt to ban has therefore turned into an effort to regulate. The questions have become more technical: how to define autonomy precisely, and where legal responsibility lies. The discourse of Human Rights Watch International, for example, has shifted: while continuing to argue for a ban, the NGO demands "the maintenance of meaningful human control over weapons systems and the use of force". Regulation would be based on the principles of international law: the obligation to distinguish between civilians and combatants, the proportionality of means and ends, and the military necessity of the use of force. But converting these sometimes-abstract principles into technical solutions is not easy; hence the idea of some form of human control is now central to the discussions. As the technological trend is towards increasing autonomy with a greater role for AI, it is between these two poles (human control and AI decision) that the future of autonomous weapons lies.
