How can artificial intelligence be regulated?

Is it possible to regulate AI?

with Félicien Vallet, Head of the AI department at the Commission Nationale de l'Informatique et des Libertés (CNIL) (French Data Protection Authority)
On September 20th, 2023 | 4 min reading time
Félicien Vallet
Head of the AI department at the Commission Nationale de l'Informatique et des Libertés (CNIL) (French Data Protection Authority)
Key takeaways
  • The increasing use of AI in many areas raises the question of how it should be managed.
  • At present, there is no AI-specific regulation in Europe, even though the field is evolving rapidly.
  • The European Parliament has voted in favour of the AI Act, a regulatory text on artificial intelligence.
  • The use and rapid development of AI are causes for concern and raise major issues of security, transparency and automation.
  • To address these issues, the French Data Protection Authority (CNIL) has set up a multidisciplinary department dedicated to AI.

The generative Artificial Intelligence (AI) industry is booming. According to Bloomberg, it is expected to reach $1,300 billion by 2032. But this exponential growth is causing concern worldwide and raises questions about the security and regulation of this market. Against this backdrop, Microsoft, Google, OpenAI and the start-up Anthropic – four American AI giants – are joining forces to regulate themselves in the face of growing mistrust. Europe is considering regulations, and the British Prime Minister, Rishi Sunak, has announced that the first global summit dedicated to artificial intelligence will be held in the UK by the end of the year.

Faced with the increasingly prominent role of AI systems in our daily lives, the CNIL has taken the unprecedented step of setting up a department specifically dedicated to this field. Under the aegis of Félicien Vallet, the regulatory authority is seeking to apply its regulatory principles to the major issues of security, transparency and automation.

Why did the CNIL feel the need to set up a new department devoted exclusively to artificial intelligence?

The CNIL, a regulatory authority since 1978, is responsible for data protection. Since 2018, our point of reference in this area has been the GDPR. Lately, we have been asked to deal with issues relating to the processing of personal data that is increasingly based on AI, regardless of the sector of activity. At the CNIL, we tend to be organised on a sectoral basis, with departments dedicated to health or government affairs, for example. The CNIL has observed that AI is being used more and more in the fight against tax fraud (e.g. automated detection of swimming pools from satellite images), in security (e.g. augmented video surveillance systems that analyse human behaviour), in healthcare (e.g. diagnostic assistance), and in education (e.g. via learning analytics, aimed at personalising learning paths). As a regulator of personal data processing, the CNIL is paying particular attention to the uses of AI that are likely to have an impact on citizens. The creation of a multidisciplinary department dedicated to AI reflects the cross-disciplinary nature of the issues involved in this field.

What is your definition of artificial intelligence? Is it restricted to the generative artificial intelligence that we hear so much about at the moment?

We don’t have a definition in the strict sense. The definition we propose on our website refers to a logical and automated process, generally based on an algorithm, with the aim of carrying out well-defined tasks. According to the European Parliament, it is a tool used by machines to “reproduce behaviours associated with humans, such as reasoning, planning and creativity”. Generative artificial intelligence is one part of existing artificial intelligence systems, although this too raises questions about the use of personal data.

What is the CNIL’s approach to regulating AI?

The CNIL has a risk-based approach. This logic is at the heart of the AI Act, which classifies AI systems into four categories: unacceptable, high risk, limited risk, and minimal risk. So-called unacceptable AI systems are prohibited outright on European soil. High-risk systems, which are often deployed in sectors such as healthcare or government affairs, are particularly sensitive, as they can have a significant impact on individuals and often process personal data. Special precautions are taken before they are implemented. Limited-risk systems, such as generative AI, are subject to greater transparency requirements towards users. Minimal-risk systems are not subject to any specific obligations.

What are the major issues surrounding these AI systems? 

The main issues are transparency, automation, and security. Transparency is crucial to ensure that people are informed about the processing of their data by AI systems, and to enable them to exercise their rights. These systems can use huge amounts of data, sometimes without the knowledge of individuals.

Automation also raises questions, even when a human operator is involved in the process to make final decisions. Cognitive biases, such as the tendency to place excessive trust in machines, can influence decision-making. It is essential to be vigilant regarding the operator’s control methods and the way in which the operator is actually integrated into the decision-making loop.

The security of AI systems is another major concern. Like any IT system, they can be the target of cyber-attacks, in particular access hijacking or data theft. In addition, they can be maliciously exploited, for example to run phishing campaigns or spread disinformation on a large scale.

Is there already a method for implementing these regulations in the future?

Our action plan is structured around four points. The first is to understand AI technology, a field that is constantly evolving as each day brings new innovations and scientific breakthroughs.

The second is to steer the use of AI. The GDPR is our reference, but this text is technologically neutral. It does not specifically prescribe how personal data should be handled in the context of AI. We therefore need to adapt the general principles of the GDPR to the different technologies and uses of AI to provide effective guidelines for professionals.

The third point is to develop interaction and cooperation with our European counterparts, the Défenseur des droits (Defender of Rights), the Autorité de la concurrence (French competition authority) and research institutes to address issues relating to discrimination, competition and innovation, with the aim of bringing together as many players as possible around these issues.

Finally, we need to put in place controls, both before and after the implementation of AI systems. We therefore need to develop methodologies for carrying out these checks, whether through checklists, self-assessment guides or other innovative tools.

Are there any other projects of this type?

At present, there are no regulations specific to AI, whether in France, Europe or elsewhere. The draft European regulation will be a first in this area. However, some general regulations, such as the GDPR in Europe, apply indirectly to AI. Certain sector-specific regulations, such as those relating to product safety, may also apply to products incorporating AI, such as medical devices.

Will the differences in regulations between Europe and the United States be even more marked when it comes to AI?

Historically, Europe has been more proactive in regulating digital technologies, as demonstrated by the adoption of the GDPR. However, even in the US, the idea of regulating AI has been gaining ground. For example, the CEO of OpenAI told the US Congress that AI regulation would be beneficial. It should be noted, however, that what US technology executives see as adequate regulation may not be exactly what Europe envisages. It is with the aim of anticipating the AI Act and securing the support of the major international industrialists in the field that European Commissioners Margrethe Vestager (competition) and Thierry Breton (internal market) have proposed an AI Code of Conduct and an AI Pact respectively.

Interview by Jean Zeid
