What are the next challenges for AI?

When algorithms replace humans, what is at stake?

with Véronique Steyer, Associate Professor in Innovation Management at École Polytechnique (IP Paris), and Milie Taing, Founder and CEO of Lili.ai
On March 22nd, 2023
4 min reading time
Véronique Steyer
Associate Professor in Innovation Management at École Polytechnique (IP Paris)
Milie Taing
Founder and CEO of Lili.ai
Key takeaways
  • Artificial intelligence (AI) is increasingly involved in our daily decisions but raises practical and ethical issues.
  • A distinction must be made between the notion of interpretability of AI (its functioning) and the notion of accountability (the degree of responsibility of the creator/user).
  • A draft European regulation should lead in 2023 to a classification of AIs according to different levels of risk.
  • AI can free humans from time-consuming and repetitive tasks and allow them to focus on more important tasks.
  • It is in France's interest to invest in this type of AI for very large projects, because the country has access to colossal amounts of data to process.

Computer systems are increasingly involved in everyday decisions, especially artificial intelligence (AI), which can take over functions previously performed by humans. But how can we trust them if we do not know how they work? And what about when this system is called upon to make decisions that could put our lives at stake?

Regulating AI 

Véronique Steyer, Associate Professor in the Management of Innovation and Entrepreneurship (MIE) department at École Polytechnique (IP Paris), has been working on the question of AI explainability for several years. According to her, it is important to distinguish the notion of interpretability – which consists of understanding how an algorithm works, in order to improve its robustness and diagnose its flaws – from the notion of accountability. The latter raises the question: in the event of material or physical damage caused by an artificial intelligence, what is the degree of responsibility of the person or company that designed or uses this AI?

However, where they exist, AI explainability tools are generally developed with an interpretability logic rather than an accountability logic. To put it plainly, they allow us to observe what is happening inside the system without necessarily explaining why a given decision was taken, or according to which criteria. They can thus be good indicators of the AI's level of performance, without assuring the user of the relevance of the decisions taken.

For her, it is therefore necessary to provide a regulatory framework for these AIs. In France, the public health code already stipulates that “the designers of an algorithmic treatment […] ensure that its operation is explicable for users” (law n° 2021-1017 of 2nd August 2021). With this text, the legislator was aiming more specifically at AIs used to diagnose certain diseases, including cancers. But users – in this case health professionals – still need to be trained, not only in AI itself, but also in how AI explanation tools work and how to interpret them… How else will we know whether a diagnosis is made according to the right criteria?

At the European level, a draft regulation is underway in 2023, which should lead to the classification of AIs according to different levels of risk, and require certification guaranteeing various degrees of explainability. But who should develop these tools, and how can we prevent the GAFAs from controlling them? “We are far from having answered all these thorny questions, and many companies that develop AI are still unaware of the notion of explainability,” notes Véronique Steyer.

Freeing humans from time-consuming tasks

Meanwhile, increasingly powerful AIs are being developed in ever more diverse sectors of activity. An AI entrepreneur, Milie Taing founded the start-up Lili.ai in 2016 on the Polytechnique campus. She had previously spent eight years as a project manager, specialising in cost control, at SNC-Lavalin, the Canadian leader in large projects. It was there that she had to trace the history of several major construction projects that had fallen far behind schedule.

To document complaints, it was necessary to dig through up to 18 years of very heterogeneous data (email exchanges, attachments, meeting minutes, etc.) and to identify when the errors explaining the delays in these projects had occurred. But it is impossible for humans to analyse data scattered across thousands of mailboxes and decades. In large construction projects, this documentation chaos can lead to very heavy penalties, and sometimes even bankruptcy. Milie Taing therefore had the idea of teaming up with data scientists and developers to build artificial intelligence software whose role is to carry out documentary archaeology.

“To explore the past of a project, our algorithms open all the documents related to that project one by one. Then they extract all the sentences and keywords and automatically tag them with hashtags, a bit like Twitter,” explains Milie Taing. These hashtags ultimately make it possible to conduct documentary searches efficiently. “To avoid problems in the case of an ongoing project, we have modelled a hundred or so recurring problems that could lead to a complaint and possible penalties,” she adds.
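
Her description – split each document into sentences, match recurring problem keywords, and index the results under hashtags – can be pictured with a minimal sketch in Python (illustrative only; the keyword list and function names are assumptions, not Lili.ai's actual implementation):

```python
import re
from collections import defaultdict

# Hypothetical keyword-to-hashtag mapping; Lili.ai reportedly models around
# a hundred recurring problems, whereas this sketch shows only three.
KEYWORDS = {
    "delay": "#delay",
    "penalty": "#penalty",
    "change order": "#change_order",
}

def tag_document(text: str) -> dict:
    """Split a document into sentences and index each one under the hashtags it triggers."""
    index = defaultdict(list)
    sentences = re.split(r"(?<=[.!?])\s+", text)
    for sentence in sentences:
        lowered = sentence.lower()
        for keyword, hashtag in KEYWORDS.items():
            if keyword in lowered:
                index[hashtag].append(sentence.strip())
    return dict(index)

# A later "documentary archaeology" query then amounts to a hashtag lookup.
index = tag_document("The change order was approved late. This caused a delay.")
print(index["#delay"])  # ['This caused a delay.']
```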

Lili.ai’s software is already used by major accounts such as the Société du Grand Paris, EDF and Orano (nuclear power plant management). And according to Milie Taing, it does not threaten the jobs of project managers. “In this case, AI assists in the management of problems, which makes it possible to identify malfunctions before it’s too late,” she says. It aims to free humans from time-consuming and repetitive tasks, allowing them to concentrate on higher value-added work.

But doesn’t this AI risk pointing out the responsibility, or even the guilt, of certain people in the failure of a project? “In fact, employees are attached to their projects and are prepared to hand over their e-mails to recover the costs and margin that would have been lost had the work been delayed. Although, legally, staff e-mails belong to the company, we have included very sophisticated filtering functions in our software that give employees control over what they do or do not agree to export into the Lili solution,” she states.
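
The kind of pre-export filtering she alludes to can be sketched as a simple allow/deny rule applied to each mailbox item before anything leaves the company (a hedged illustration; the field names and rules are invented for the example, not the Lili solution's actual filters):

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

# Hypothetical employee-defined rules: senders and terms that must never be exported.
EXCLUDED_SENDERS = {"hr@example.com"}
EXCLUDED_TERMS = {"personal", "confidential"}

def approved_for_export(mail: Email) -> bool:
    """Return True only if the e-mail passes the employee's filters."""
    if mail.sender in EXCLUDED_SENDERS:
        return False
    text = f"{mail.subject} {mail.body}".lower()
    return not any(term in text for term in EXCLUDED_TERMS)

mailbox = [
    Email("pm@example.com", "Schedule slip", "The concrete delivery was three weeks late."),
    Email("hr@example.com", "Personal matter", "Please keep this private."),
]
to_export = [m for m in mailbox if approved_for_export(m)]  # keeps only the first e-mail
```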

France ahead of the game

According to Milie Taing, it is in France’s interest to invest in this type of AI, as the country has some of the best international expertise in the execution of very large projects, and therefore has access to colossal amounts of data. On the other hand, it will be less competitive than Asian players, for example, in other applications, such as facial recognition, which moreover runs counter to a certain French ethic.

“All technology carries a script, with what it does or does not allow us to do, the role it gives to humans, and the values it carries,” Véronique Steyer points out. “For example, in the 1950s, in California, a road leading to a beach was built, and to prevent the beach from being invaded by a population of too modest origin, the bridges spanning the road were set very low, which prevented buses from passing. So, I think it’s very important to understand not only how a system works, but also what societal choices are embedded in an AI system in a totally tacit way that we don’t see.”

Currently, the most widespread AIs are chatbots, which cannot be said to threaten the human species. But by becoming accustomed to the performance of these chatbots, we could tomorrow neglect to question the mechanisms and objectives of more sophisticated AIs.

Marina Julienne
