Killer robots: should we be afraid?

Can we justify the rise in military robots?

with Richard Robert, Journalist and Author
On November 9th, 2021 | 4 min reading time
Alan Wagner
Assistant professor in the Department of Aerospace Engineering and research associate in the Rock Ethics Institute
Key takeaways
  • In theory, robotic soldiers don’t get emotional, vengeful, or angry. But the possibility of an accident raises issues of responsibility and liability, which are of great importance in military matters.
  • Increased autonomy thanks to AI, as well as maximised lethality, raises a philosophical problem: is the prospect of human soldiers facing bloodless, incredibly efficient machines acceptable?
  • But future autonomous systems might be perfect at targeting, so such a “precise” war would be less bloody.
  • Advances in precision warfare might also drive a new kind of deterrence.

Technologies for military robots have made significant progress over the last two decades, raising issues about using autonomous robotic soldiers for active engagement in battles. What are the ethical concerns?

Alan Wagner. There are pros and cons. Firstly, since robotic soldiers don’t get emotional, vengeful, or angry, they would – in theory – follow the rules of war very closely. This could prevent some of the worst atrocities that have occurred in wartime. In that sense, robots could potentially be more ethical than human soldiers. However, the counterargument is that, currently, robotic systems are generally not capable of distinguishing between civilians and soldiers. As a result, there is a risk that robots would accidentally target civilians. That being said, these two arguments are not mutually exclusive.

The possibility of an accident raises questions around responsibility and liability; this is the core of the ethical debate in its current form. One of our values when it comes to military decision-making is that a human is responsible for a decision.


But responsibility is an extremely difficult notion when it comes to military robots. If a commander authorises an autonomous system, is the commander still responsible for its course of action? If the system makes mistakes, how long does the authority persist? Over a fixed period? Or only regarding certain actions? These questions need to be considered more carefully, but also codified, to decide what the limitations of these systems are and to determine their boundaries with regard to ethics.
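
To make the codification question concrete, here is a minimal sketch of what a time-bounded, action-scoped authorisation might look like in software. It is purely illustrative: the Action and Authorization types, the permits check, and the 48-hour limit are hypothetical, not drawn from any real system or doctrine.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum, auto

class Action(Enum):
    RECONNAISSANCE = auto()
    PERIMETER_DEFENCE = auto()
    ENGAGE_TARGET = auto()          # lethal action

@dataclass(frozen=True)
class Authorization:
    """A commander's grant: who is liable, for which actions, until when."""
    commander: str
    allowed_actions: frozenset      # hypothetical scope of the grant
    expires_at: datetime

    def permits(self, action: Action, at: datetime) -> bool:
        # Authority persists only for the listed actions and a fixed period.
        return action in self.allowed_actions and at <= self.expires_at

# Hypothetical usage: authority lapses after 48 hours and never covers
# lethal engagement, so responsibility for that decision stays human.
grant = Authorization(
    commander="Cdr. Example",
    allowed_actions=frozenset({Action.RECONNAISSANCE, Action.PERIMETER_DEFENCE}),
    expires_at=datetime.now() + timedelta(hours=48),
)
assert not grant.permits(Action.ENGAGE_TARGET, datetime.now())
```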

Defining responsibility and authority is a legal point, one that could be dealt with through a set of rules. But there is also a philosophical problem: is the prospect of flesh-and-blood soldiers facing bloodless machines acceptable?

It comes back to our values and belief systems. The question is not just about how unfair it would be for a soldier to face some Terminator-like, unstoppable killing machine. If the values of both your military and your society are such that only a human can decide to take another human’s life in a military context, then that would preclude the use of autonomous systems for most battles or other military operations.

But debating in such absolute terms simplifies the ethical question. You might have a value system that favours maximising the safety of your soldiers; in that case, you may require autonomous robots in your military. Values are often in conflict with one another, and there may be a trade-off. The principal value for most countries is to not lose a war, because the consequences are high, not just on the battlefield but for society as a whole. This leads to a difficult challenge: if another country is going to develop autonomous systems that have no ethical values but give it a strategic advantage, are you required to do so as well, so as not to let it have that advantage?

Conversely, there is also the question of legitimacy. If you win a battle thanks to robots, will your adversary accept your victory? Will you be able to really make peace and put an end to the war? This is a key question, though it goes unnoticed in ethical debates over military robots. And unfortunately, we’re walking right into it. Consider the United States’ use of drone warfare in Iraq. Evidence shows that when soldiers weren’t at risk, the number of American drone attacks went up, suggesting that when people are not at risk it may be easier to foster wars, with more battles. On the other hand, in the recent Armenian-Azerbaijani war, the use of drones may have brought the war to an end more quickly.

In the past, the mechanisation of warfare made it more costly and bloodier before a reversal. Could the same happen with robots?

It’s not clear whether robots will make warfare bloodier. They could make it less bloody if the autonomous systems are well developed. Many years from now, autonomous systems could become perfect at targeting, completely avoiding civilian casualties. Thousands of lives would be saved. Therefore, one has to be careful about the very notion of “killer robot.”


We might not like precision-guided missiles, but they are a replacement for carpet bombing. The same happened in civilian industries such as agriculture, where after a century of mass use of fertilisers, we are switching to a precision model. “Surgical strikes,” an expression used in the 1990s, was challenged as just another public-relations slogan. But the underlying trend, which is quite consistent with our value systems, is that we kept developing technologies that would minimise civilian casualties. The 90s were the beginning of precision warfare, mostly with precision-guided missiles. Things have advanced since then: we have precision reconnaissance and capabilities for precision assassination, with long-distance guns able to kill just one person in a car.

It is a difficult trade-off: should we have these technologies, or accept the wars that might result without them? The slippery-slope argument says it might become a battle over who controls these technologies and the engineers able to develop them. But another argument is that if heads of state and other key decision-makers can be targeted, precision warfare can follow the same deterrence logic as nuclear bombs, inviting all sides to show restraint.

Does the prospect of artificial intelligence alter these considerations?

The way artificial intelligence and robotics relate is that the robot is the machine, including the sensors, the actuators, and the physical system, while the artificial intelligence (AI) is the brain that makes the machine do things. They are highly connected: the smarter the system, the more capable it is. But AI is a vast field, encompassing everything from computer vision and perception to intelligent decision-making and intelligent movement. All of these could go into robotic systems to make them more capable and less prone to flaws and errors. Here AI enables precision, which reinforces the arguments above.
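
That division of labour can be pictured as a simple control loop. The sketch below is illustrative only, assuming hypothetical read_sensors, decide, and drive_actuators functions rather than any real robotics API: the robot is the physical sense-act loop, and the AI is the decide step plugged into it.

```python
import time

def read_sensors() -> dict:
    """Hypothetical sensor read: camera frames, position, rangefinder, etc."""
    return {"image": None, "position": (0.0, 0.0)}

def decide(observation: dict) -> str:
    """The 'brain': perception and decision-making live here.
    Swapping in a smarter model makes the whole system more capable."""
    return "hold_position"

def drive_actuators(command: str) -> None:
    """Hypothetical actuator output: motors, servos, etc."""
    pass

def control_loop(hz: float = 10.0) -> None:
    # The robot is the sense-act hardware loop; the AI is the decide() step.
    period = 1.0 / hz
    while True:
        observation = read_sensors()
        command = decide(observation)
        drive_actuators(command)
        time.sleep(period)
```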

Can AI replace people in this context? Again, the answer is not simple. AI may replace the person for some decisions – non-lethal ones – or for all decisions, or just some of the time, but not all of the time. We are back to the legal boundaries and liability issues.
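
One common way to codify that split is a human-in-the-loop gate: the machine acts alone only on decisions explicitly marked non-lethal and escalates everything else to an operator. A minimal sketch, with a hypothetical Decision type and operator prompt; nothing here reflects a real command-and-control interface.

```python
from enum import Enum, auto

class Decision(Enum):
    NAVIGATE = auto()
    SURVEIL = auto()
    ENGAGE = auto()   # lethal

# Decisions the AI may take on its own; everything else is escalated.
AUTONOMOUS_OK = {Decision.NAVIGATE, Decision.SURVEIL}

def request_human_approval(decision: Decision) -> bool:
    """Hypothetical operator review: a human remains responsible."""
    answer = input(f"Approve {decision.name}? [y/N] ")
    return answer.strip().lower() == "y"

def authorise(decision: Decision) -> bool:
    """The AI handles non-lethal choices; lethal ones stay with a human."""
    if decision in AUTONOMOUS_OK:
        return True
    return request_human_approval(decision)
```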

What it really changes is the strategic parameters of the decision. We are talking about kinetic warfare here. When using drones to lead a charge, for example, you risk much less than when you charge with soldiers. Drones are just manufactured items, easy to replace. You may never lose momentum once you have gained it, which is a game changer strategically. You could imagine a battle where you drop in a bunch of robots to control a bridge and they do it for years. They don’t tire or sleep. They just sit there, and nobody crosses the bridge unauthorised.
