
AI Act: how Europe wants to regulate machines

Sophy Caulier, Independent journalist
On December 1st, 2021 | 3 min reading time
Winston Maxwell
Director of Law and Digital Studies at Télécom Paris (IP Paris)
Key takeaways
  • AI is not outside the law. Whether it's the GDPR for personal data or sector-specific regulations in the health, finance, or automotive sectors, existing regulations already apply.
  • In Machine Learning (ML), algorithms build themselves by learning from data and operate in a probabilistic manner. Their results are accurate most of the time, but a risk of error is an unavoidable characteristic of this type of model.
  • A challenge for the future will be to surround these probabilistic systems, which are very effective for tasks like image recognition, with safeguards.
  • Upcoming EU AI regulations in the form of the “AI Act” will require compliance testing and ‘CE’ marking for any high-risk AI systems put on the market in Europe.

How do you approach regulatory issues in artificial intelligence (AI)?

Regulatory issues need to align with the technical reality. At Télécom Paris we adopt an interdisciplinary approach through the Operational AI Ethics programme, which brings together six disciplines: applied mathematics, statistics, computer science, economics, law, and sociology. Interdisciplinarity is exciting, but it takes work! We each speak different languages and must build bridges between our different scientific disciplines.

What is the status of AI regulation in Europe?

AI is not an unregulated field. Existing regulations already apply to AI, be it the GDPR for personal data or sector-specific regulations in the fields of health (medical devices), finance (trading models, solvency), or automotive, for example.

So why does the European AI Act propose to add specific limitations?

AI software, and in particular Machine Learning (ML) software, poses new problems. Traditional software – symbolic AI, sometimes called "good old-fashioned AI" – is developed from precise specifications, with certain and provable output data. These are deterministic algorithms: input "a" plus input "b" will always lead to output "c". If this is not the case, there is a bug.
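That determinism can be pictured with a minimal sketch. The rule and inputs below are invented purely for illustration: a symbolic program applies a hand-written rule, so identical inputs always produce the same, provable output.

```python
# Minimal sketch of deterministic, rule-based ("symbolic") software:
# the rule is written by hand from a specification, so the same inputs
# always yield the same output. The rule itself is invented for illustration.
def symbolic_decision(a: int, b: int) -> str:
    if a + b > 10:   # precise, hand-written rule
        return "c"
    return "not c"

# The mapping never varies: if it did, that would be a bug.
assert symbolic_decision(7, 5) == "c"
assert symbolic_decision(7, 5) == symbolic_decision(7, 5)
```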

In ML, the algorithms create themselves by learning from large volumes of data and operate probabilistically. Their results are accurate most of the time, but they can also base their predictions on irrelevant correlations that they have learned from the training data. The risk of error is an unavoidable feature of probabilistic ML models, which raises new regulatory issues, especially for high-risk AI systems. Is it possible to use a probabilistic algorithm in a critical system like image recognition in an autonomous car? Moreover, ML algorithms are relatively unintelligible.
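By contrast with the deterministic sketch above, a probabilistic model learns its decision rule from data and reports probabilities rather than certainties, so a residual error rate is built in. The sketch below assumes scikit-learn and a synthetic dataset; neither is mentioned in the interview.

```python
# Sketch of the probabilistic behaviour described above, using scikit-learn
# on synthetic data (both are assumptions for illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                               # two synthetic features
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)  # noisy labels

model = LogisticRegression().fit(X, y)            # the rule is learned, not written
probabilities = model.predict_proba(X)[:, 1]      # confidence scores, not certainties
print("training accuracy:", model.score(X, y))    # high, but typically below 1.0:
                                                  # some error is inherent, not a bug
```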

The 2018 crash involving the autonomous Uber car in Arizona is a perfect illustration of the problem. The image recognition system had learned that humans usually cross the road near a crosswalk. A pedestrian was crossing the road with his bike away from the crosswalk, and the system classified the image as a vehicle, not a pedestrian, right up until the last second before the collision. Hence, the car did not brake in time and the pedestrian was killed. In addition, the driver who was supposed to supervise the system was inattentive (inattention is a common phenomenon called "automation complacency"). The challenge for the future will be to surround these probabilistic systems – which are very efficient for tasks like image recognition – with safeguards. Hybrid systems, which combine ML and symbolic AI, are a promising way forward.
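One way to picture the hybrid idea is to wrap a probabilistic detector in a symbolic safety rule. The sketch below is purely illustrative: the Detection class, the labels, and the 0.6 threshold are assumptions, not a description of any real autonomous-driving system.

```python
# Illustrative hybrid safeguard: an ML detector's probabilistic output is
# checked by a hand-written symbolic rule before the vehicle acts.
# Classes, labels and threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "vehicle"
    confidence: float   # probability reported by the ML model

def should_brake(objects_on_path: list[Detection]) -> bool:
    """Symbolic rule: brake if anything on the path is a pedestrian,
    or if the model is too unsure to rule a pedestrian out."""
    return any(d.label == "pedestrian" or d.confidence < 0.6
               for d in objects_on_path)

# Even a mislabelled, low-confidence "vehicle" triggers braking.
print(should_brake([Detection(label="vehicle", confidence=0.55)]))  # True
```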

How can we regulate to address this issue?

The draft EU AI Regulation will require compliance testing and CE marking for any high-risk AI system placed on the market in Europe. The first challenge is to define what is meant by a high-risk AI system! At present, this would include software used by the police, for credit scoring, for reviewing university or job applicants, software in cars, etc. The list will continue to grow. Real-time facial recognition used by the police for identification purposes will be subject to specific constraints, including independent testing and the involvement of at least two human operators before confirming a 'match'.

For other high-risk systems, the draft regulation envisages compliance testing by the company itself. Each system will have to be subject to a risk assessment and be accompanied by documentation explaining the risks. The systems will have to ensure effective human control. The operator of the system should generate event logs allowing for auditability of the system. For AI systems integrated into systems already covered by regulation (e.g. medical devices), the testing and compliance regime will be governed by the sectoral regulation. This avoids duplication in the regulation.
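As a rough illustration of the event-logging point, an operator might record every automated decision in an append-only audit trail along the lines of the sketch below. The field names and JSON format are assumptions made for illustration, not requirements taken from the draft regulation.

```python
# Sketch of decision-level event logging for auditability.
# Field names and format are illustrative assumptions, not AI Act requirements.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def log_decision(system_id: str, inputs: dict, output: str, operator: str) -> None:
    """Record one automated decision with a timestamp and the responsible
    human operator, so the decision can be reconstructed and audited later."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "inputs": inputs,
        "output": output,
        "human_operator": operator,
    }))

# Hypothetical usage for a credit-scoring system:
log_decision("credit-scoring-v2", {"income_eur": 42000}, "approved", "analyst_17")
```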

Why is there so much distrust of ML algorithms when risks are accepted in other areas?

This mistrust is not new. The Tricot report of 1975 – the report that led to the adoption of the French Data Protection Act in 1978 – already mentioned the distrust of computer systems that reduce human beings to a series of statistical probabilities. By reducing us to numbers, such systems deny our individuality and humanity. We are used to statistical profiling when it comes to receiving an advertisement on Facebook or a music recommendation on Deezer. But for more serious decisions – a hiring decision, admission to a university, triggering a tax audit, or getting a loan – being judged solely on a statistical profile is problematic, especially when the algorithm that creates the profile is unintelligible!

The algorithm should therefore provide statistical insight into the issue, but never replace the discernment and nuance of a human decision-maker. But beware, human shortcomings should not be minimised either – in the US it has been shown that judges hand down harsher sentences just before lunch, when they are hungry. Algorithms can help compensate for these human biases.
