Killer robots: should we be afraid?

Can we justify the rise in military robots?

Richard Robert, Journalist and Author
On November 9th, 2021
4 min reading time
Alan Wagner
Assistant professor in the Department of Aerospace Engineering and research associate in the Rock Ethics Institute
Key takeaways
  • In theory, robotic soldiers don’t get emotional, vengeful, or angry. But the possibility of an accident raises issues of responsibility and liability, which are of great importance in military matters.
  • Increased autonomy thanks to AI, as well as maximised lethality, raises a philosophical problem: is the prospect of human soldiers facing bloodless, incredibly efficient machines acceptable?
  • But future autonomous systems might be perfect at targeting, so such a “precise” war would be less bloody.
  • Advances in precision warfare might also drive a new kind of deterrence.

Technologies for military robots have made significant progress over the last two decades, raising issues about using autonomous robotic soldiers for active engagement in battles. What are the ethical concerns?

Alan Wagner. There are pros and cons. Firstly, since robotic soldiers don’t get emotional, vengeful, or angry, they would – in theory – follow the rules of war very closely. This could prevent some of the worst atrocities that have occurred in wartime. In that sense, robots could potentially be more ethical than human soldiers. However, the counterargument is that, currently, robotic systems are generally not capable of distinguishing between civilians and soldiers. As a result, there is a risk that robots would accidentally target civilians. That being said, these two arguments are not mutually exclusive.

The possibility of an accident raises questions around responsibility and liability; this is the core of the ethical debate in its current form. One of our values when it comes to military decision-making is that a human is responsible for a decision.

The possibility of an accident is at the core of the current ethical debate.

But responsibility is an extremely difficult notion when it comes to military robots. If a commander authorises an autonomous system, is the commander still responsible for its course of action? If the system makes mistakes, how long does that authority persist? Over a fixed period? Or only regarding certain actions? These questions need to be considered more carefully, but also codified, to decide what the limitations of these systems are and to determine their boundaries with regard to ethics.

Defining responsibility and authority is a legal point, one that could be dealt with based on a set of rules. But there is also a philosophical problem: is the prospect of flesh-and-blood soldiers facing bloodless machines acceptable?

It comes back to our values and belief systems. The question is not just about how unfair it would be for a soldier to face some Terminator-like, unstoppable killing machine. If the values of both your military and your society hold that only a human can decide to take another human’s life in a military context, then that would preclude the use of autonomous systems for most battles or other military operations.

But debating in such absolute terms simplifies the ethical question. You might have a value system that favours maximising the safety of your soldiers. In that case, you may require autonomous robots in your military. Values are often in conflict with one another, and there may be a trade-off. The principal value for most countries is not to lose a war, because the consequences are high, not just on the battlefield but for society as a whole. This leads to a difficult challenge: if another country is going to develop autonomous systems that have no ethical values but give it a strategic advantage, are you required to do the same in order to deny it that advantage?

Conversely, there is also the question of legitimacy. If you win a battle thanks to robots, will your adversary accept your victory? Will you really be able to make peace and put an end to the war? This is a key question, though it goes unnoticed in ethical debates over military robots. And unfortunately, we’re walking right into it. Consider the United States’ use of drone warfare in Iraq. Evidence shows that when soldiers weren’t at risk, the number of drone attacks by the United States went up, suggesting that when people are not at risk it could be easier to foster wars, with more battles. On the other hand, in the recent Armenian-Azerbaijani war, the use of drones may have ended the war more quickly.

In the past, the mechanisation of warfare made it more costly and bloodier before a reversal. Could the same be the case with robots?

It’s not clear whether robots will make warfare bloodier. They could make it less bloody if the autonomous systems are well developed. Many years from now, autonomous systems could become perfect at targeting, completely avoiding civilian casualties. Thousands of lives would be saved. Therefore, one has to be careful about the very notion of “killer robot.”

Thousands of lives could be saved, so one has to be careful about the very notion of killer robot.

We might not like precision-guided missiles, but they are a replacement for carpet bombing. The same happened in civilian industries such as agriculture, where after a century of mass use of fertilisers, we are switching to a precision model. “Surgical strikes,” an expression used in the 1990s, was challenged as just another public-relations motto. But the underlying trend, which is quite consistent with our value systems, is that we kept developing technologies that would minimise civilian casualties. The 1990s were the beginning of precision warfare, with mostly precision-guided missiles. Things have advanced: we now have precision reconnaissance and capacities for precision assassination, with long-distance guns able to kill just one person in a car.

It is a difficult trade-off: should we have these technologies, or should we accept the wars that might result without them? The slippery-slope argument says it might become a battle over who controls these technologies and the engineers able to develop them. But another argument is that if heads of state and other key decision-makers can be targeted, precision warfare could follow the same deterrence logic as nuclear weapons, inviting all sides to show restraint.

Does the prospect of artificial intelligence alter these considerations?

The way artificial intelligence and robotics relate is that the robot is the machine, including the sensors, the actuators, and the physical system. The artificial intelligence (AI) is the brain that makes the machine do things. They are highly connected: the smarter the system, the more capable it is. But AI is a vast field, encompassing everything from computer vision and perception to intelligent decision-making to intelligent movement. All these things could go into robotic systems and be used to make them more capable and less prone to flaws and errors. Here AI is enabling precision, which reinforces the arguments above.

Can AI replace people in this context? Again, the answer is not simple. AI may replace the person for some decisions (non-lethal ones, for instance), or for all decisions, or only some of the time rather than all of the time. We are back to the legal boundaries and liability issues.

What it really changes is the strategic parameters of the decision. We are talking about kinetic warfare here. When using, for example, drones to lead a charge, you risk much less than when you charge with soldiers. Drones are just manufactured items, easy to replace. You may never lose the momentum once you gain it, which is a game-changer strategically. You could imagine a battle where you drop a bunch of robots in to control a bridge and they do it for years. They don’t tire or sleep. They just sit there, and nobody crosses the bridge unauthorised.
