Killer robots: should we be afraid?

Can we justify the rise in military robots?

with Richard Robert, Journalist and Author
On November 9th, 2021
Alan Wagner
Assistant professor in the Department of Aerospace Engineering and research associate in the Rock Ethics Institute
Key takeaways
  • In theory, robotic soldiers don’t get emotional, vengeful, or angry. But the possibility of an accident raises issues of responsibility and liability, which are of great importance in military matters.
  • Increased autonomy thanks to AI, as well as maximised lethality, raises a philosophical problem: is the prospect of human soldiers facing bloodless, incredibly efficient machines acceptable?
  • But future autonomous systems might be perfect at targeting, so such a “precise” war would be less bloody.
  • Advances in precision warfare might also drive a new kind of deterrence.

Technologies for military robots have made significant progress over the last two decades, raising issues about using autonomous robotic soldiers for active engagement in battles. What are the ethical concerns?

Alan Wagner. There are pros and cons. Firstly, since robotic soldiers don’t get emotional, vengeful, or angry, they would – in theory – follow the rules of war very closely. This could prevent some of the worst atrocities that have occurred in wartime. In that sense, robots could potentially be more ethical than human soldiers. However, the counterargument is that, currently, robotic systems are generally not capable of distinguishing between civilians and soldiers. As a result, there is a risk that robots would accidentally target civilians. That being said, these two arguments are not mutually exclusive.

The possibility of an accident raises questions around responsibility and liability; this is the core of the ethical debate in its current form. One of our values when it comes to military decision-making is that a human is responsible for a decision.


But responsibility is an extremely difficult notion when it comes to military robots. If a commander authorises an autonomous system, is the commander still responsible for its course of action? If the system makes mistakes, how long does that authority persist? Over a fixed period? Or only regarding certain actions? These questions need to be considered more carefully, but also codified, to decide what the limitations of these systems are and to determine their boundaries with regard to ethics.

Defining responsibility and authority is a legal point, one that could be dealt with based on a set of rules. But there is also a philosophical problem: is the prospect of flesh-and-blood soldiers facing bloodless machines acceptable?

It comes back to our values and belief systems. The question is not just about how unfair it would be for a soldier to face some Terminator-like, unstoppable killing machine. If your military and your society hold the value that only a human can decide to take another human’s life in a military context, then that would preclude the use of autonomous systems for most battles or other military operations.

But debating in such absolute terms oversimplifies the ethical question. You might have a value system which favours maximising the safety of your soldiers. In that case, you may require autonomous robots in your military. Values are often in conflict with one another, and there might be a trade-off. The principal value for most countries is to not lose a war, because the consequences are high, not just on the battlefield but for society as a whole. This leads to a difficult challenge: if one country is going to develop autonomous systems that have no ethical values but give it a strategic advantage, are you required to do so as well, in order not to cede that advantage?

Conversely, there is also the question of legitimacy. If you win a battle thanks to robots, will your adversary accept your victory? Will you really be able to make peace and put an end to the war? This is a key question, though it goes unnoticed in ethical debates over military robots. And unfortunately, we’re walking right into it. Consider the United States’ use of drone warfare in Iraq. Evidence shows that when soldiers weren’t at risk, the number of drone attacks by the United States went up, suggesting that when people are not at risk it may be easier to start wars and fight more battles. On the other hand, in the recent Armenian-Azerbaijani war, the use of drones may have brought the war to an end more quickly.

In the past, the mechanisation of warfare made it more costly and bloodier before a reversal set in. Could the same happen with robots?

It’s not clear whether robots will make warfare bloodier. They could make it less bloody if the autonomous systems are well developed. Many years from now, autonomous systems could become perfect at targeting, completely avoiding civilian casualties. Thousands of lives would be saved. Therefore, one has to be careful about the very notion of a “killer robot.”


We might not like precision-guided missiles, but they are a replacement for carpet bombing. The same happened in civilian industries such as agriculture, where after a century of mass use of fertilisers we are switching to a precision model. “Surgical strikes,” an expression used in the 1990s, was challenged as just another public-relations motto. But the underlying trend, which is quite consistent with our value systems, is that we kept developing technologies that would minimise civilian casualties. The 90s were the beginning of precision warfare, mostly with precision-guided missiles. Things have advanced: we now have precision reconnaissance and capacities for precision assassination, with long-distance guns able to kill just one person in a car.

It is a difficult trade-off: should we have these technologies, or accept the wars that might result without them? The slippery-slope argument says it might become a battle over who controls these technologies and the engineers able to develop them. But another argument is that if heads of state and other key decision-makers can be targeted, precision warfare follows the same deterrence logic as nuclear weapons, inviting all sides to show restraint.

Does the prospect of artificial intelligence alter these considerations?

The way artificial intelligence and robotics relate is that the robot is the machine, including the sensors, the actuators, and the physical system. The artificial intelligence (AI) is the brain that makes the machine do things. They are highly connected: the smarter the system, the more capable it is. But AI is a vast field, encompassing everything from computer vision and perception to intelligent decision-making to intelligent movement. All these things could go into robotic systems and be used to make them more capable and less prone to flaws and errors. Here, AI is enabling precision, which reinforces the arguments above.

Can AI replace people in this context? Again, the answer is not simple. AI may replace the person for some decisions, the non-lethal ones, or for all decisions, or only some of the time but not all of the time. We are back to the legal boundaries and liability issues.

What it really changes is the strategic parameters of the decision. We are talking about kinetic warfare here. When using, for example, drones to lead a charge, you risk much less than when you charge with soldiers. Drones are just manufactured items, easy to replace. You may never lose the momentum once you have gained it, which is a game changer strategically. You could imagine a battle where you drop a bunch of robots in to control a bridge and they hold it for years. They don’t tire or sleep. They just sit there, and nobody crosses the bridge unauthorised.
