Digital · Science and technology
What are the next challenges for AI?

When algorithms replace humans, what is at stake? 

with Véronique Steyer, Associate Professor in Innovation Management at École Polytechnique (IP Paris) and Milie Taing, Founder and CEO of Lili.ai
On March 22nd, 2023
4 min reading time
Véronique Steyer
Associate Professor in Innovation Management at École Polytechnique (IP Paris)
Milie Taing
Founder and CEO of Lili.ai
Key takeaways
  • Artificial intelligence (AI) is increasingly involved in our daily decisions but raises practical and ethical issues.
  • A distinction must be made between the notion of interpretability of AI (its functioning) and the notion of accountability (the degree of responsibility of the creator/user).
  • A draft European regulation should lead in 2023 to a classification of AIs according to different levels of risk.
  • AI can free humans from time-consuming and repetitive tasks and allow them to focus on more important tasks.
  • It is in France's interest to invest in this type of AI for very large projects because it has access to colossal amounts of data to process.

Computer systems are increasingly involved in everyday decisions, especially artificial intelligence (AI), which can take over functions previously performed by humans. But how can we trust these systems if we do not know how they work? And what about when such a system is called upon to make decisions that could put our lives at stake?

Regulating AI 

Véronique Steyer, a lecturer in the Management of Innovation and Entrepreneurship (MIE) department at École Polytechnique (IP Paris), has been working on the question of AI explainability for several years. According to her, it is important to distinguish the notion of interpretability – understanding how an algorithm works, in order to improve its robustness and diagnose its flaws – from the notion of accountability. The latter raises the question: in the event of material or physical damage caused by an artificial intelligence, what is the degree of responsibility of the person or company that designed or uses the AI?

However, where they exist, AI explainability tools are generally developed with an interpretability logic rather than an accountability logic. To put it plainly, they allow us to observe what is happening inside the system, without necessarily explaining why a given decision was taken, or according to which criteria. They can thus be good indicators of an AI's level of performance, without assuring the user that its decisions are relevant.

For her, it is therefore necessary to provide a regulatory framework for these AIs. In France, the public health code already stipulates that "the designers of an algorithmic treatment […] ensure that its operation is explicable for users" (law no. 2021-1017 of 2nd August 2021). With this text, the legislator was targeting more specifically the AIs used to diagnose certain diseases, including cancers. But users – in this case health professionals – still need to be trained not only in AI, but also in how AI explanation tools work and how to interpret them… How else will we know whether a diagnosis is made according to the right criteria?

At the European level, a draft regulation is underway in 2023, which should lead to the classification of AIs according to different levels of risk, and require a certification guaranteeing various degrees of explainability. But who should develop these tools, and how can we prevent the GAFAs from controlling them? "We are far from having answered all these thorny questions, and many companies that develop AI are still unaware of the notion of explainability," notes Véronique Steyer.

Freeing humans from time-consuming tasks

Meanwhile, increasingly powerful AIs are being developed in ever more diverse sectors of activity. An entrepreneur in AI, Milie Taing founded the start-up Lili.ai in 2016 on the Polytechnique campus. She first spent eight years as a project manager, specialising in cost control, at SNC-Lavalin, the Canadian leader in large projects. It was there that she had to trace the history of several major construction projects that had fallen far behind schedule.

To document complaints, it was necessary to dig through up to 18 years of very heterogeneous data (email exchanges, attachments, meeting minutes, etc.) and identify when the errors explaining the delays in these projects had occurred. But it is impossible for humans to analyse data scattered across thousands of mailboxes and decades. In large construction projects, this documentary chaos can lead to very heavy penalties, and sometimes even bankruptcy. Milie Taing therefore had the idea of teaming up with data scientists and developers to build artificial intelligence software whose role is to carry out documentary archaeology.

"To explore the past of a project, our algorithms open all the documents related to that project one by one. Then they extract all the sentences and keywords and automatically tag them with hashtags, a bit like Twitter," explains Milie Taing. These hashtags ultimately make it possible to conduct documentary searches efficiently. To avoid problems on an ongoing project, her team has modelled a hundred or so recurring problems that could lead to a complaint and possible penalties.
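To give a flavour of the kind of hashtag tagging Taing describes, here is a toy Python sketch. It is not Lili.ai's actual pipeline – the stopword list, scoring, and example text are invented for illustration: the idea is simply to split a document into words, drop common stopwords, and turn the most frequent terms into Twitter-style hashtags.

```python
import re
from collections import Counter

# Minimal stopword list, invented for this example.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "was", "for", "on"}

def extract_hashtags(text, top_n=5):
    """Toy keyword tagger: lowercase the text, keep alphabetic words,
    drop stopwords and very short words, then turn the most frequent
    remaining terms into Twitter-style hashtags."""
    words = re.findall(r"[a-zA-Z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return ["#" + w for w, _ in counts.most_common(top_n)]

# Hypothetical meeting-minutes snippet.
minutes = ("Meeting minutes: the concrete delivery delay on the north site "
           "was discussed; the delay impacts the concrete pouring schedule.")
print(extract_hashtags(minutes, top_n=3))
```

A real system would use far more sophisticated keyword extraction (and, per the article, a model of recurring project problems), but the same principle applies: tags make decades of heterogeneous documents searchable.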

Lili.ai's software is already used by major clients such as the Société du Grand Paris, EDF and Orano (nuclear power plant management). And according to Milie Taing, it does not threaten the jobs of project managers. "In this case, AI assists in the management of problems, which makes it possible to identify malfunctions before it's too late," she says. It aims to free humans from time-consuming and repetitive tasks, allowing them to concentrate on higher value-added work.


But doesn't this AI risk exposing the responsibility, or even the guilt, of certain people in the failure of a project? "In fact, employees are attached to their projects and are prepared to hand over their e-mails to recover the costs and margin that would have been lost had the work been delayed. Although, legally, staff e-mails belong to the company, we have included very sophisticated filtering functions in our software that give employees control over what they do or do not agree to export into the Lili solution," she states.

France ahead of the game

According to Milie Taing, it is in France's interest to invest in this type of AI, as the country has some of the best international expertise in the execution of very large projects, and therefore access to colossal amounts of data. On the other hand, it will be less competitive than Asian countries, for example, in other applications such as facial recognition, which moreover runs counter to a certain French ethic.

"All technology carries a script, with what it does or does not allow us to do, the role it gives to humans, and the values it carries," Véronique Steyer points out. "For example, in the 1950s, in California, a road leading to a beach was built, and to prevent the beach from being invaded by a population of too modest origin, the bridges spanning the road were set very low, which prevented buses from passing. So, I think it's very important to understand not only how a system works, but also what societal choices are embedded in an AI system in a totally tacit way that we don't see."

Currently, the most widespread AIs are chatbots, which cannot be said to threaten the human species. But by becoming accustomed to the performance of these chatbots, we could tomorrow neglect to question the mechanisms and objectives of more sophisticated AIs.

Marina Julienne
