How can artificial intelligence be regulated?

AI Act: how Europe wants to regulate machines

with Sophy Caulier, independent journalist
On December 1st, 2021 | 3 min reading time
Winston Maxwell
Director of Law and Digital Studies at Télécom Paris (IP Paris)
Key takeaways
  • AI is not outside the law. Whether it's the GDPR for personal data or sector-specific regulations in the health, finance, or automotive sectors, existing regulations already apply.
  • In Machine Learning (ML), algorithms build themselves by learning from large volumes of data and operate probabilistically. Their results are accurate most of the time, but a risk of error is an unavoidable characteristic of this type of model.
  • A challenge for the future will be to surround these probabilistic systems, which are very powerful for tasks like image recognition, with safeguards.
  • Upcoming EU AI regulation in the form of the “AI Act” will require compliance testing and ‘CE’ marking for any high-risk AI system placed on the market in Europe.

How do you approach regulatory issues in artificial intelligence (AI)?

Regulatory issues need to align with the technical reality. At Télécom Paris we adopt an interdisciplinary approach through the Operational AI Ethics programme, which brings together six disciplines: applied mathematics, statistics, computer science, economics, law, and sociology. Interdisciplinarity is exciting, but it takes work! We each speak different languages and must build bridges between our different scientific disciplines.

What is the status of AI regulation in Europe?

AI is not an unregulated field. Existing regulations already apply to AI, be it the GDPR for personal data, or sector-specific regulations in the field of health (medical devices), finance (trading models, solvency), or automotive, for example.

So why does the European AI Act propose to add specific limitations?

AI software, and in particular Machine Learning (ML) software, poses new problems. Traditional software – symbolic AI, sometimes called “good old-fashioned AI” – is developed from precise specifications, with certain and provable output data. These are deterministic algorithms: input “a” plus input “b” will always lead to output “c”. If this is not the case, there is a bug.

In ML, the algorithms create themselves by learning from large volumes of data and operate probabilistically. Their results are accurate most of the time. However, they can base their predictions on irrelevant correlations that they have learned from the training data. The risk of error is an unavoidable feature of probabilistic ML models, which raises new regulatory issues, especially for high-risk AI systems. Is it possible to use a probabilistic algorithm in a critical system like image recognition in an autonomous car? Moreover, ML algorithms are relatively unintelligible.
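To make the contrast concrete, here is a minimal sketch in Python. It is purely illustrative: the function names, the toy scoring rule, and the 5% error rate are assumptions for the example, not drawn from any real system. It contrasts a deterministic rule, whose output is always provable, with a probabilistic classifier that returns a confidence score and is wrong on a small fraction of inputs by construction.

```python
import random

def deterministic_rule(a: int, b: int) -> int:
    # Symbolic, specification-driven software: the same inputs always
    # produce the same output. Any deviation is a bug.
    return a + b

def probabilistic_classifier(features: list[float]) -> tuple[str, float]:
    # Toy stand-in for a learned ML model: it outputs a label plus a
    # confidence score, and errs on a small fraction of inputs by design.
    score = sum(features) / len(features)
    confidence = min(max(score, 0.0), 1.0)
    label = "pedestrian" if confidence >= 0.5 else "vehicle"
    if random.random() < 0.05:  # the irreducible error rate of the model
        label = "vehicle" if label == "pedestrian" else "pedestrian"
    return label, confidence

assert deterministic_rule(2, 3) == 5          # always true, or the code is buggy
print(probabilistic_classifier([0.4, 0.9]))   # usually right, occasionally wrong
```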

The 2018 crash involving the autonomous Uber car in Arizona is a perfect illustration of the problem. The image recognition system had learned that a human usually crosses the road near a crosswalk. A pedestrian was crossing the road with her bike away from the crosswalk, and the system classified the image as a vehicle, not a pedestrian, right up until the last second before the collision. Hence, the car did not brake in time and the pedestrian was killed. In addition, the driver who was supposed to supervise the system was inattentive (inattention is a common phenomenon called “automation complacency”). The challenge for the future will be to surround these probabilistic systems – which are very efficient for tasks like image recognition – with safeguards. Hybrid systems, which combine ML and symbolic AI, are a promising way forward.
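As one way to picture such a safeguard, the sketch below is hypothetical: the data structure and function names are invented, and this is not how any actual autonomous-driving stack works. It wraps a probabilistic perception output in a symbolic rule, in the spirit of a hybrid system: if any object is detected in the vehicle's path, brake, whatever the classifier says the object is.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "vehicle", "unknown"
    confidence: float   # classifier confidence in [0, 1]
    in_path: bool       # is the object in the vehicle's trajectory?

def should_brake(detections: list[Detection]) -> bool:
    # Symbolic safeguard around the probabilistic classifier: any object
    # in the path triggers braking, regardless of how it was labelled.
    return any(d.in_path for d in detections)

# A misclassified pedestrian, as in the Arizona example, would still
# trigger braking, because the rule ignores the (possibly wrong) label.
print(should_brake([Detection(label="vehicle", confidence=0.8, in_path=True)]))  # True
```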

How can we regulate to address this issue?

The draft EU AI Regulation will require compliance testing and CE marking for any high-risk AI system placed on the market in Europe. The first challenge is to define what is meant by a high-risk AI system! At present, this would include software used by the police, for credit scoring, for reviewing university or job applicants, software in cars, etc. The list will continue to grow. Real-time facial recognition used by the police for identification purposes will be subject to specific constraints, including independent testing and the involvement of at least two human operators before confirming a ‘match’.
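A rough sketch of that “two human operators” constraint might look like the following. The data structures and names are invented for illustration; the draft text states the principle, not a format.

```python
def confirm_match(candidate_id: str, operator_approvals: set[str]) -> bool:
    # The algorithmic 'match' for candidate_id is treated only as a proposal:
    # it is confirmed only if at least two distinct human operators approve it.
    confirmed = len(operator_approvals) >= 2
    print(f"match {candidate_id}: approvals={sorted(operator_approvals)} confirmed={confirmed}")
    return confirmed

confirm_match("candidate-42", {"operator_a"})                 # not confirmed: one review
confirm_match("candidate-42", {"operator_a", "operator_b"})   # confirmed: two reviews
```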

For other high-risk systems, the draft regulation envisages compliance testing by the company itself. Each system will have to be subject to a risk assessment and be accompanied by documentation explaining the risks. The systems will have to ensure effective human control. The operator of the system should generate event logs allowing the system to be audited. For AI systems integrated into products already covered by regulation (e.g. medical devices), the testing and compliance regime will be governed by the sectoral regulation. This avoids duplication in the regulation.
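To give a flavour of the event-logging requirement, here is a minimal sketch. The record fields and the credit-scoring example are assumptions: the draft regulation specifies the obligation of auditability, not a format.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_decisions.log", level=logging.INFO, format="%(message)s")

def log_decision(model_version: str, inputs: dict, prediction: str, reviewer: str) -> dict:
    # One audit-trail entry per automated decision: what the model saw,
    # what it predicted, and which human operator reviewed the outcome.
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "prediction": prediction,
        "reviewed_by": reviewer,
    }
    logging.info(json.dumps(record))
    return record

log_decision("credit-scoring-v1.3", {"income": 32000, "age": 41}, "loan_refused", "analyst_07")
```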

Why is there so much distrust of ML algorithms when risks are accepted in other areas?

This mistrust is not new. The Tricot report of 1975 – the report that led to the adoption of the French Data Protection Act in 1978 – already mentioned the distrust of computer systems that reduce human beings to a series of statistical probabilities. By reducing us to numbers, such systems deny our individuality and humanity. We are used to statistical profiling when it comes to receiving an advertisement on Facebook or a music recommendation on Deezer. But for more serious decisions – a hiring decision, admission to a university, triggering a tax audit, or getting a loan – being judged solely on a statistical profile is problematic, especially when the algorithm that creates the profile is unintelligible!

The algorithm should therefore provide statistical insight into the issue, but never replace the discernment and nuance of a human decision-maker. But beware, human shortcomings should not be minimised either – in the US it has been shown that judges hand down harsher sentences just before lunch, when they are hungry. Algorithms can help compensate for these human biases.
