How can artificial intelligence be regulated?

AI Act: how Europe wants to regulate machines

with Sophy Caulier, Independent journalist
On December 1st, 2021 | 3 min reading time
Winston Maxwell
Director of Law and Digital Studies at Télécom Paris (IP Paris)
Key takeaways
  • AI is not outside the law. Whether it is the GDPR for personal data or sector-specific regulations in the health, finance, or automotive sectors, existing regulations already apply.
  • In Machine Learning (ML), algorithms create themselves by learning from data and operate in a probabilistic manner. Their results are accurate most of the time, but the risk of error is an unavoidable characteristic of this type of model.
  • A challenge for the future will be to surround these probabilistic systems – which are very powerful for tasks like image recognition – with safeguards.
  • Upcoming EU AI regulations in the form of the “AI Act” will require compliance testing and ‘CE’ marking for any high-risk AI systems put on the market in Europe.

How do you approach regulatory issues in artificial intelligence (AI)?

Regulatory issues need to align with the technical reality. At Télécom Paris we adopt an interdisciplinary approach through the Operational AI Ethics programme which brings together six disciplines: applied mathematics, statistics, computer science, economics, law, and sociology. Interdisciplinarity is exciting, but it takes work! We each speak different languages and must build bridges between our different scientific disciplines.

What is the status of AI regulation in Europe?

AI is not an unregulated field. Existing regulations already apply to AI, be it the GDPR for personal data, or sector-specific regulations in the fields of health (medical devices), finance (trading models, solvency), or the automotive sector, for example.

So why does the European AI Act propose to add specific limitations?

AI software, and in particular Machine Learning (ML) software, poses new problems. Traditional software – symbolic AI, sometimes called “good old-fashioned AI” – is developed from precise specifications, with outputs that are certain and provable. These are deterministic algorithms: input “a” plus input “b” will always lead to output “c”. If this is not the case, there is a bug.

In ML, the algorithms create themselves by learning from large volumes of data and operate probabilistically. Their results are accurate most of the time. However, they can base their predictions on irrelevant correlations that they have learned from the training data. The risk of error is an unavoidable feature of probabilistic ML models, which raises new regulatory issues, especially for high-risk AI systems. Is it possible to use a probabilistic algorithm in a critical system like image recognition in an autonomous car? Moreover, ML algorithms are relatively unintelligible.
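
To make the contrast concrete, here is a minimal Python sketch – purely hypothetical, with invented rules, weights, and feature names – of a deterministic check next to a probabilistic, learned-style score whose output can be skewed by a correlation picked up from training data:

```python
# Toy contrast between a deterministic rule and a probabilistic, "learned" score.
# All rules, weights, and feature names below are invented for illustration only.

# Deterministic ("good old-fashioned") logic: same inputs, same output, always.
def deterministic_check(age: int, has_licence: bool) -> str:
    if age >= 18 and has_licence:
        return "allowed to drive"
    return "not allowed to drive"

# Probabilistic stand-in for a trained ML classifier: the output is a score,
# and the score can be driven by a correlation learned from training data
# (here, proximity to a crosswalk) rather than by the thing we actually care about.
def pedestrian_score(near_crosswalk: float, object_width: float) -> float:
    score = 0.7 * near_crosswalk + 0.3 * (1.0 - object_width)
    return min(max(score, 0.0), 1.0)  # clamp to a probability-like range [0, 1]

if __name__ == "__main__":
    print(deterministic_check(25, True))                            # always the same answer
    print(pedestrian_score(near_crosswalk=0.9, object_width=0.3))   # high score near a crosswalk
    print(pedestrian_score(near_crosswalk=0.1, object_width=0.3))   # lower score away from it
```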

The 2018 crash involving the autonomous Uber car in Arizona is a perfect illustration of the problem. The image recognition system had learned that a human usually crosses the road near a crosswalk. A pedestrian was crossing the road with a bike away from the crosswalk, and the system classified the image as a vehicle, not a pedestrian, right up until the last second before the collision. Hence, the car did not brake in time and the pedestrian was killed. In addition, the driver who was supposed to supervise the system was inattentive (inattention is a common phenomenon called “automation complacency”). The challenge for the future will be to surround these probabilistic systems – which are very efficient for tasks like image recognition – with safeguards. Hybrid systems, which combine ML and symbolic AI, are a promising way forward.
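
One way to picture such a safeguard – as a minimal, hypothetical Python sketch only, with invented classes and thresholds rather than anything drawn from a real driving stack – is a symbolic rule layer that forces a conservative action whenever the probabilistic classifier is not confident enough:

```python
# Hypothetical sketch of a symbolic safeguard wrapped around a probabilistic
# classifier (the "hybrid" idea). The classes and thresholds are invented.

from typing import NamedTuple

class Prediction(NamedTuple):
    label: str          # e.g. "pedestrian", "vehicle", "unknown"
    confidence: float   # model's probability for that label, in [0, 1]

def ml_classifier(frame_id: int) -> Prediction:
    # Stand-in for a trained model; a real one would return a class + probability.
    fake_outputs = {
        0: Prediction("vehicle", 0.55),
        1: Prediction("pedestrian", 0.48),
        2: Prediction("vehicle", 0.92),
    }
    return fake_outputs.get(frame_id, Prediction("unknown", 0.0))

def safeguarded_decision(frame_id: int) -> str:
    """Symbolic rules layered on top of the ML output."""
    pred = ml_classifier(frame_id)
    if pred.confidence < 0.7:
        # Low confidence: assume the most dangerous case rather than trust the label.
        return "BRAKE (uncertain classification, treat object as a pedestrian)"
    if pred.label == "pedestrian":
        return "BRAKE (pedestrian detected)"
    return "CONTINUE"

if __name__ == "__main__":
    for frame in range(3):
        print(frame, safeguarded_decision(frame))
```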

How can we regulate to address this issue?

The draft EU AI Regulation will require compliance testing and CE marking for any high-risk AI system placed on the market in Europe. The first challenge is to define what is meant by a high-risk AI system! At present, this would include software used by the police, for credit scoring, for reviewing university or job applicants, software in cars, etc. The list will continue to grow. Real-time facial recognition used by the police for identification purposes will be subject to specific constraints, including independent testing, and the involvement of at least two human operators before confirming a ‘match’.

For other high-risk systems, the draft regulation envisages compliance testing by the company itself. Each system will have to be subject to a risk assessment and be accompanied by documentation explaining the risks. The systems will have to ensure effective human control. The operator of the system should generate event logs allowing for auditability of the system. For AI systems integrated into systems already covered by regulation (e.g. medical devices), the testing and compliance regime will be governed by the sectoral regulation. This avoids duplication in the regulation.

Why is there so much distrust of ML algorithms when risks are accepted in other areas?

This mistrust is not new. The Tricot report of 1975 – the report that led to the adoption of the French Data Protection Act in 1978 – already mentioned the distrust of computer systems that reduce human beings to a series of statistical probabilities. By reducing us to numbers, such systems deny our individuality and humanity. We are used to statistical profiling when it comes to receiving an advertisement on Facebook or a music recommendation on Deezer. But for more serious decisions – a hiring decision, admission to a university, triggering a tax audit, or getting a loan – being judged solely on a statistical profile is problematic, especially when the algorithm that creates the profile is unintelligible!

The algorithm should therefore provide statistical insight into the issue, but never replace the discernment and nuance of a human decision-maker. But beware, human shortcomings should not be minimised either – in the US it has been shown that judges hand down harsher sentences before lunch, when they are hungry. Algorithms can help compensate for these human biases.
