Killer robots: should we be afraid?

Is the use of autonomous weapons inevitable?

with Richard Robert, Journalist and Author
On November 9th, 2021
3 min reading time
Key takeaways
  • Certain NGOs and governments have sought a ban on killer robots through campaigns such as Stop Killer Robots.
  • Arguments include dehumanisation, algorithmic bias, and machines' inability to understand the consequences of their actions.
  • However, the rapid spread of these weapons systems seems to have side-lined these discussions, leading to a robotic arms race.
  • Nevertheless, the idea that some form of human control is required remains central to current discussions.

The rise of autonomous lethal weapons in the early 2000s has provoked political reactions. Since 2013, states and non-governmental organisations have been engaged in discussions within the framework of the Convention on Certain Conventional Weapons, a United Nations body based in Geneva. The Campaign to Stop Killer Robots was also launched by a group of NGOs, including Human Rights Watch. But now, in 2021, it has become clear that these efforts have not been particularly successful. The new arms race that has already begun makes an outright ban very unlikely; the question has therefore shifted to international regulation.

The arguments for a ban

The first argument put forward by the Stop Killer Robots campaign is dehumanisation: machines do not see us as people, but as lines of code. The second argument is algorithmic bias: facial recognition used in some automatic weapons systems favours pale faces with strong features, reproducing institutional discrimination against women and people of colour. The third argument is the difference between a human decision and a computer decision: machines do not understand the complexity of a context, and the consequences of their actions can undermine the legal and social order.

Machines do not understand context or the consequences of their actions… humans must remain in control.

Therefore, humans must remain in control. Other elements support this, such as the issue of legal liability, or the lowering of the threshold for triggering a conflict. A drone war is a bit like a video game, potentially allowing warring parties to distance themselves from responsibility for their actions. The last argument is the arms race. This race has begun, and it is precisely this race that explains the failure of the campaign to ban autonomous weapons systems.

The failure of state-to-state talks

Alongside the campaign led by NGO activists, several states have pushed for severe limitation, too – around 30 have come out in favour of a complete ban. The UN Secretary-General has spoken out repeatedly on the subject, in strong terms¹: “machines that have the power and discretion to kill without human intervention are politically unacceptable, morally repugnant and should be banned under international law”. But the rapid spread of these weapons systems (produced by a growing number of countries, some of which trade in them) has sidelined these discussions.

One reason for this failure is that countries advocating an outright ban carry little weight in the international arena, while the main producing and using countries are heavyweights: the US, China and Russia are permanent members of the Security Council. The prospect of a treaty was formally rejected in 2019. The US and Russia were the staunchest objectors. China, while less vocal, is on the same side. The UK and France, the other two permanent members of the Council, have long leaned towards a ban but have nonetheless taken the industrial route of manufacturing these weapons systems.

As we have seen in the nuclear field, the political culture favoured by these powers is to limit access to these advancements to an exclusive ‘club’ of countries rather than to allow their own progress to be hindered. Yet the technologies involved in autonomous lethal weapons are largely developed in the civilian world and will become increasingly widespread. In these conditions, the major states are counting on their technological lead to avoid being caught off guard.

The attempt to ban has therefore turned into an effort to regulate. The questions asked become more technical: a precise definition of autonomy, of legal responsibility. The discourse of Human Rights Watch, for example, has shifted: while continuing to argue for a ban, the NGO demands “the maintenance of meaningful human control over weapons systems and the use of force”. Regulation would be based on the principles of international law: the obligation to distinguish between civilians and combatants, the proportionality of means and ends, and the military necessity of the use of force. But converting these sometimes-abstract principles into technical solutions is not easy; hence the idea that some form of human control is required is now central to the discussions. As the technological trend is towards increasing autonomy with a greater role for AI, it is between these two poles (human control and AI decision) that the future of autonomous weapons lies.

¹ https://news.un.org/fr/story/2019/03/1039521
