
Is it possible to regulate AI?

Félicien Vallet
Head of the AI department at the Commission Nationale de l'Informatique et des Libertés (CNIL) (French Data Protection Authority)
Key takeaways
  • The increasing use of AI in many areas raises the question of how it should be managed.
  • At present, Europe has no regulation specific to AI, even though the field is evolving rapidly.
  • The European Parliament has voted in favour of the AI Act, a regulatory text on artificial intelligence.
  • The use and rapid development of AI is a cause for concern and raises major issues of security, transparency and automation.
  • To address these issues, the French Data Protection Authority (CNIL) has set up a multidisciplinary department dedicated to AI.

The generative Artificial Intelligence (AI) industry is booming. According to Bloomberg, it is expected to reach $1.3 trillion by 2032. But this exponential growth is causing concern worldwide and raises questions about the security and regulation of this market. Faced with this growing market, Microsoft, Google, OpenAI and the start-up Anthropic – four American AI giants – are joining forces to regulate themselves in the face of growing mistrust. Europe is considering regulations, and the British Prime Minister, Rishi Sunak, has announced that the first global summit dedicated to artificial intelligence will be held in the UK by the end of the year.

Faced with the increasingly prominent role of AI systems in our daily lives, the CNIL has taken the unprecedented step of setting up a department specifically dedicated to this field. Under the aegis of Félicien Vallet, the authority is seeking to apply its regulatory principles to the major issues of security, transparency and automation.

Why did the CNIL feel the need to set up a new department devoted exclusively to artificial intelligence?

The CNIL, a regulatory authority since 1978, is responsible for data protection. Since 2018, our point of reference in this area has been the GDPR. Lately, we have been asked to deal with issues relating to the processing of personal data that is increasingly based on AI, regardless of the sector of activity. At the CNIL, we tend to be organised on a sectoral basis, with departments dedicated to health or government affairs, for example. The CNIL has observed that AI is being used more and more in the fight against tax fraud (e.g. automated detection of swimming pools based on satellite images), in security (e.g. augmented video surveillance systems that analyse human behaviour), in healthcare (e.g. diagnostic assistance), and in education (e.g. learning analytics aimed at personalising learning paths). As a regulator of personal data processing, the CNIL is paying particular attention to the uses of AI that are likely to have an impact on citizens. The creation of a multidisciplinary department dedicated to AI reflects the cross-disciplinary nature of the issues involved in this field.

What is your definition of artificial intelligence? Is it restricted to the generative artificial intelligence that we hear so much about at the moment?

We don’t have a definition in the strict sense. The definition we propose on our website refers to a logical and automated process, generally based on an algorithm, with the aim of carrying out well-defined tasks. According to the European Parliament, it is a tool used by machines to “reproduce behaviours associated with humans, such as reasoning, planning and creativity”. Generative artificial intelligence is one part of existing artificial intelligence systems, although it too raises questions about the use of personal data.

What is the CNIL’s approach to regulating AI?

The CNIL has a risk-based approach. This logic is at the heart of the AI Act, which classifies AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk. AI systems deemed to pose an unacceptable risk cannot be deployed on European soil at all: they are prohibited outright. High-risk systems, which are often deployed in sectors such as healthcare or government affairs, are particularly sensitive, as they can have a significant impact on individuals and often process personal data. Special precautions are taken before they are implemented. Limited-risk systems, such as generative AI, require greater transparency for users. Minimal-risk systems are not subject to any specific obligations.
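The four-tier logic described above can be sketched as a simple lookup. This is a purely illustrative sketch, not the Act's legal text: the example use cases and the one-line obligation summaries are simplifications chosen for the sake of the example (the mapping of "social scoring" to the prohibited tier reflects commonly cited examples, not an official classification by the CNIL).

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories of the draft AI Act (summarised)."""
    UNACCEPTABLE = "prohibited in the EU"
    HIGH = "strict precautions before deployment"
    LIMITED = "transparency obligations towards users"
    MINIMAL = "no specific obligations"

# Hypothetical example mapping of use cases to tiers (illustrative only).
RISK_BY_USE_CASE = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "medical diagnostic assistance": RiskTier.HIGH,
    "generative chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return the tier name and its summarised obligation for a use case."""
    tier = RISK_BY_USE_CASE.get(use_case, RiskTier.MINIMAL)
    return f"{tier.name}: {tier.value}"
```

For example, `obligations("generative chatbot")` yields the limited-risk tier and its transparency requirement, mirroring the treatment of generative AI described above.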

What are the major issues surrounding these AI systems? 

The main issues are transparency, automation, and security. Transparency is crucial to ensure that people are informed about the processing of their data by AI systems, and to enable them to exercise their rights. These systems can use huge amounts of data, sometimes without the knowledge of individuals.

Automation also raises questions, even when a human operator is involved in the process to make final decisions. Cognitive biases, such as the tendency to place excessive trust in machines, can influence decision-making. It is essential to be vigilant regarding the operator's control methods and the way in which the operator is actually integrated into the decision-making loop.

The security of AI systems is another major concern. Like any IT system, they can be the target of cyber-attacks, in particular access hijacking or data theft. In addition, they can be maliciously exploited, for example to run phishing campaigns or spread disinformation on a large scale.

Is there already a method for implementing these regulations in the future?

Our action plan is structured around four points. The first is to understand AI technology, a field that is constantly evolving as each day brings new innovations and scientific breakthroughs.

The second is to steer the use of AI. The GDPR is our reference, but this text is technologically neutral. It does not specifically prescribe how personal data should be handled in the context of AI. We therefore need to adapt the general principles of the GDPR to the different technologies and uses of AI to provide effective guidelines for professionals.

The third point is to develop interaction and cooperation with our European counterparts, the Défenseur des droits (Defender of Rights), the Autorité de la concurrence (French competition authority) and research institutes to address issues relating to discrimination, competition and innovation, with the aim of bringing together as many players as possible around these issues.

Finally, we need to put in place controls, both before and after the implementation of AI systems. We therefore need to develop methodologies for carrying out these checks, whether through checklists, self-assessment guides or other innovative tools.

Are there any other projects of this type?

At present, there are no regulations specific to AI, whether in France, Europe or elsewhere. The draft European regulation will be a first in this area. However, some general regulations, such as the GDPR in Europe, apply indirectly to AI. Certain sector-specific regulations, such as those relating to product safety, may also apply to products incorporating AI, such as medical devices.

Will the differences in regulations between Europe and the United States be even more marked when it comes to AI?

Historically, Europe has been more proactive in regulating digital technologies, as demonstrated by the adoption of the GDPR. However, even in the US, the idea of regulating AI has been gaining ground. For example, the CEO of OpenAI told the US Congress that AI regulation would be beneficial. It should be noted, however, that what US technology executives see as adequate regulation may not be exactly what Europe envisages. It is with the aim of anticipating the AI Act and securing the support of the major international industrialists in the field that European Commissioners Margrethe Vestager (competition) and Thierry Breton (internal market) have proposed an AI Code of Conduct and an AI Pact respectively.

Interview by Jean Zeid
