What are the next challenges for AI?

When algorithms replace humans, what is at stake? 

Véronique Steyer, Associate Professor in Innovation Management at École Polytechnique (IP Paris) and Milie Taing, Founder and CEO of Lili.ai
On March 22nd, 2023
4 min reading time
Véronique Steyer
Associate Professor in Innovation Management at École Polytechnique (IP Paris)
Milie Taing
Founder and CEO of Lili.ai
Key takeaways
  • Artificial intelligence (AI) is increasingly involved in our daily decisions but raises practical and ethical issues.
  • A distinction must be made between the notion of interpretability of AI (its functioning) and the notion of accountability (the degree of responsibility of the creator/user).
  • A draft European regulation should lead in 2023 to a classification of AIs according to different levels of risk.
  • AI can free humans from time-consuming and repetitive tasks and allow them to focus on more important tasks.
  • It is in France's interest to invest in this type of AI for very large projects because it has access to colossal amounts of data to process.

Computer systems are increasingly involved in everyday decisions, especially artificial intelligence (AI), which can take over functions previously performed by humans. But how can we trust them if we do not know how they work? And what happens when such a system is called upon to make decisions that could put our lives at stake?

Regulating AI 

Véronique Steyer, a lecturer in the Management of Innovation and Entrepreneurship (MIE) department at École Polytechnique (IP Paris), has been working on the question of AI explainability for several years. According to her, it is important to distinguish the notion of interpretability – which consists of understanding how an algorithm works, in order to improve its robustness and diagnose its flaws – from the notion of accountability. The latter raises the question: in the event of material or physical damage caused by an artificial intelligence, what is the degree of responsibility of the person or company that designed or uses this AI?

However, when they exist, AI explainability tools are generally developed with interpretability in mind rather than accountability. To put it plainly, they allow us to observe what is happening inside the system, without necessarily explaining why – which decisions are taken according to which criteria. They can thus be good indicators of how well an AI performs, without assuring the user of the relevance of the decisions taken.
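By way of illustration, the sketch below shows the kind of output an interpretability-oriented tool produces: a ranking of the inputs a model relies on most. It says nothing about whether those criteria are appropriate, which is precisely the accountability gap described above. The Python code, the scikit-learn library and the public dataset are assumptions chosen purely for illustration; they do not represent any system mentioned in this article.

```python
# A minimal interpretability-style sketch (illustrative assumptions only):
# it reports which inputs drive a model's predictions, not whether those
# criteria are the right ones on which to base a decision.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Public demonstration dataset, standing in for any real decision system.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's score drops -- a view of what the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: -pair[1])[:5]:
    print(f"{name}: {score:.3f}")
```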

For her, it is therefore necessary to provide a regulatory framework for these AIs. In France, the public health code already stipulates that “the designers of an algorithmic treatment […] ensure that its operation is explicable for users” (law n° 2021-1017 of 2nd August 2021). With this text, the legislator was targeting more specifically the AIs used to diagnose certain diseases, including cancers. But users – in this case, health professionals – still need to be trained not only in AI, but also in how AI explanation tools work and how to interpret them… How else will we know whether a diagnosis is made according to the right criteria?

At the European level, a draft regulation is underway in 2023, which should lead to the classification of AIs according to different levels of risk, and require certification guaranteeing various degrees of explainability. But who should develop these tools, and how can we prevent the GAFAs from controlling them? “We are far from having answered all these thorny questions, and many companies that develop AI are still unaware of the notion of explainability,” notes Véronique Steyer.

Freeing humans from time-consuming tasks

Meanwhile, increasingly powerful AIs are being developed in ever more diverse sectors of activity. An AI entrepreneur, Milie Taing founded the start-up Lili.ai in 2016 on the Polytechnique campus. She had first spent eight years as a project manager specialising in cost control at SNC Lavalin, the Canadian leader in large projects. It was there that she had to trace the history of several major construction projects that had fallen far behind schedule.

To document claims, it was necessary to dig through up to 18 years of very heterogeneous data (email exchanges, attachments, meeting minutes, etc.) and to identify when the errors explaining the delays in these projects had occurred. But it is impossible for humans to analyse data scattered over thousands of mailboxes and decades. In large construction projects, this documentation chaos can lead to very heavy penalties, and sometimes even bankruptcy. Milie Taing therefore had the idea of teaming up with data scientists and developers to build artificial intelligence software whose role is to carry out documentary archaeology.

“To explore the past of a project, our algorithms open all the documents related to that project one by one. Then they extract all the sentences and keywords and automatically tag them with hashtags, a bit like Twitter,” explains Milie Taing. These hashtags ultimately make it possible to carry out documentary searches efficiently. “To avoid problems in the case of an ongoing project, we have modelled a hundred or so recurring problems that could lead to a claim and possible penalties.”
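As a rough sketch of this hashtag-tagging idea (not Lili.ai's actual code, whose internals are not public), the Python below opens a folder of text files, splits each document into sentences, and attaches hashtags wherever certain keywords appear. The keyword list, folder name and file format are assumptions made purely for illustration.

```python
# A minimal "documentary archaeology" sketch: tag sentences in project
# documents with hashtags and build a searchable index. All names below
# (keywords, folder, file format) are hypothetical.
import re
from collections import defaultdict
from pathlib import Path

# A handful of hypothetical keywords, standing in for the ~100 modelled
# recurring problems mentioned above.
KEYWORDS = {
    "delay": "#delay",
    "penalty": "#penalty",
    "claim": "#claim",
    "variation order": "#variation_order",
    "non-conformity": "#non_conformity",
}

def tag_document(text):
    """Split a document into sentences and attach hashtags where keywords appear."""
    tagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        tags = [tag for kw, tag in KEYWORDS.items() if kw in sentence.lower()]
        if tags:
            tagged.append(f"{sentence.strip()}  {' '.join(tags)}")
    return tagged

# Hashtag -> list of (file name, tagged sentence), so past problems can be
# searched by theme rather than by mailbox.
index = defaultdict(list)
for path in Path("project_archive").glob("**/*.txt"):  # e-mails, minutes, etc., exported as text
    for line in tag_document(path.read_text(errors="ignore")):
        for tag in (token for token in line.split() if token.startswith("#")):
            index[tag].append((path.name, line))

for tag, hits in index.items():
    print(tag, len(hits), "occurrences")
```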

Lili.ai’s software is already used by major clients such as the Société du Grand Paris, EDF and Orano (nuclear power plant management). And according to Milie Taing, it does not threaten the jobs of project managers. “In this case, AI assists in the management of problems, which makes it possible to identify malfunctions before it’s too late,” she says. It aims to free humans from time-consuming and repetitive tasks, allowing them to concentrate on higher value-added work.


But doesn’t this AI risk pointing to the responsibility, or even the guilt, of certain people in the failure of a project? “In fact, employees are attached to their projects and are prepared to hand over their e-mails to recover the costs and margin that would otherwise be lost when work is delayed. Although, legally, staff e-mails belong to the company, we have included very sophisticated filtering functions in our software that give employees control over what they do or do not agree to export to the Lili solution,” she states.

France ahead of the game

According to Milie Taing, it is in France’s interest to invest in this type of AI, as the country has some of the best international expertise in the execution of very large projects, and therefore has access to colossal amounts of data. On the other hand, France will be less competitive than Asian players, for example, in other applications, such as facial recognition, which in any case runs counter to a certain French ethic.

“All technology carries a script, with what it does or does not allow us to do, the role it gives to humans, and the values it carries,” Véronique Steyer points out. “For example, in the 1950s in California, a road leading to a beach was built, and to prevent the beach from being invaded by a population of more modest means, the bridges spanning the road were set very low, which prevented buses from passing. So, I think it’s very important to understand not only how a system works, but also what societal choices are embedded in an AI system in a totally tacit way that we don’t see.”

Currently, the most widespread AIs are chatbots, which cannot be said to threaten the human species. But by becoming accustomed to the performance of these chatbots, we could tomorrow neglect to question the mechanisms and objectives of more sophisticated AI.

Marina Julienne
