The generative artificial intelligence (AI) industry is booming: according to Bloomberg, it is expected to reach $1.3 trillion by 2032. But this exponential growth is causing concern worldwide and raising questions about the security and regulation of the market. Against this backdrop, Microsoft, Google, OpenAI and the start-up Anthropic – four American AI giants – are joining forces to regulate themselves in the face of growing mistrust. Europe is preparing its own regulation, and the British Prime Minister, Rishi Sunak, has announced that the first global summit dedicated to artificial intelligence will be held in the UK by the end of the year.
Faced with the increasingly prominent role of AI systems in our daily lives, the CNIL has taken the unprecedented step of setting up a department dedicated specifically to this field. Headed by Félicien Vallet, the new department is seeking to apply the authority’s regulatory principles to the major issues of security, transparency and automation.
Why did the CNIL feel the need to set up a new department devoted exclusively to artificial intelligence?
The CNIL has been a regulatory authority responsible for data protection since 1978. Since 2018, our point of reference in this area has been the GDPR (RGPD in French). Increasingly, we are being asked to deal with personal data processing that relies on AI, regardless of the sector of activity. At the CNIL, we tend to be organised on a sectoral basis, with departments dedicated to health or government affairs, for example. We have observed that AI is being used more and more in the fight against tax fraud (e.g. automated detection of swimming pools from satellite images), in security (e.g. augmented video-surveillance systems that analyse human behaviour), in healthcare (e.g. diagnostic assistance) and in education (e.g. learning analytics aimed at personalising learning paths). As a regulator of personal data processing, the CNIL pays particular attention to uses of AI that are likely to have an impact on citizens. The cross-disciplinary nature of the issues raised by this field explains the creation of a multidisciplinary department dedicated to AI.
What is your definition of artificial intelligence? Is it restricted to the generative artificial intelligence that we hear so much about at the moment?
We don’t have a definition in the strict sense. The working definition we propose on our website refers to a logical, automated process, generally based on an algorithm, designed to carry out well-defined tasks. According to the European Parliament, AI is a tool used by machines to “reproduce behaviours associated with humans, such as reasoning, planning and creativity”. Generative artificial intelligence is only one part of existing AI systems, although it too raises questions about the use of personal data.
What is the CNIL’s approach to regulating AI?
The CNIL takes a risk-based approach. This logic is at the heart of the AI Act, which classifies AI systems into four categories: unacceptable risk, high risk, limited risk and minimal risk. Systems deemed to pose an unacceptable risk are prohibited outright on European soil. High-risk systems, often deployed in sectors such as healthcare or government affairs, are particularly sensitive: they can have a significant impact on individuals and often process personal data, so special precautions must be taken before they are put into service. Limited-risk systems, such as generative AI, are mainly subject to transparency obligations towards users. Minimal-risk systems are not subject to any specific obligations.
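To make this tiered logic concrete, here is a minimal Python sketch (ours, not the CNIL’s or the AI Act’s) of the four-category classification described above. The tier names follow the draft AI Act; the obligation strings merely paraphrase the answer and are purely illustrative assumptions.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the draft EU AI Act, as described above."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright on European soil
    HIGH = "high"                  # e.g. healthcare, government affairs
    LIMITED = "limited"            # e.g. generative AI
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical mapping from tier to obligations; the wording paraphrases
# the interview answer, not the text of the AI Act itself.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["deployment prohibited"],
    RiskTier.HIGH: ["special precautions before being put into service"],
    RiskTier.LIMITED: ["transparency obligations towards users"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.LIMITED))
# -> ['transparency obligations towards users']
```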
What are the major issues surrounding these AI systems?
The main issues are transparency, automation and security. Transparency is crucial to ensure that people are informed about how AI systems process their data, and to enable them to exercise their rights. These systems can draw on huge amounts of data, sometimes without the individuals concerned being aware of it.
Automation also raises questions, even when a human operator is involved in making the final decisions. Cognitive biases, such as the tendency to place excessive trust in machines, can influence decision-making. It is therefore essential to remain vigilant about the means of oversight available to the operator and the way in which the operator is actually integrated into the decision-making loop.
The security of AI systems is another major concern. Like any IT system, they can be the target of cyber-attacks, in particular access hijacking or data theft. In addition, they can be maliciously exploited, for example to run phishing campaigns or spread disinformation on a large scale.
Do you already have a method for implementing these future regulations?
Our action plan is structured around four points. The first is to understand AI technology, a field that is constantly evolving as each day brings new innovations and scientific breakthroughs.
The second is to steer the use of AI. The GDPR is our reference text, but it is technologically neutral: it does not specifically prescribe how personal data should be handled in the context of AI. We therefore need to adapt its general principles to the different technologies and uses of AI in order to provide effective guidelines for professionals.
The third is to develop interaction and cooperation with our European counterparts, the Défenseur des droits (French Defender of Rights), the Autorité de la concurrence (French competition authority) and research institutes, in order to address questions of discrimination, competition and innovation and to bring together as many players as possible around these issues.
Finally, we need to put in place controls, both before and after AI systems are deployed. That means developing methodologies for carrying out these checks, whether through checklists, self-assessment guides or other innovative tools.
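By way of illustration only, a pre-deployment self-assessment checklist of the kind mentioned above could be structured as sketched below. The questions, the example system name and the whole structure are our hypothetical assumptions, echoing the transparency, automation and security issues discussed earlier; they do not come from any CNIL methodology.

```python
from dataclasses import dataclass, field

@dataclass
class Check:
    """One item of a hypothetical pre-deployment self-assessment."""
    question: str
    passed: bool = False

@dataclass
class SelfAssessment:
    """A hypothetical checklist attached to a single AI system."""
    system_name: str
    checks: list[Check] = field(default_factory=list)

    def outstanding(self) -> list[str]:
        """Questions still unanswered before the system goes live."""
        return [c.question for c in self.checks if not c.passed]

# Hypothetical example system and questions.
audit = SelfAssessment(
    system_name="augmented-video-surveillance",
    checks=[
        Check("Are individuals informed that their data is being processed?"),
        Check("Is a human operator genuinely part of the decision loop?"),
        Check("Is the system protected against access hijacking?"),
    ],
)
print(audit.outstanding())  # all three questions are still open here
```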
Are there any other projects of this type?
Currently, there are no regulations specific to AI, whether in France, Europe or elsewhere. The draft European regulation will be a first in this area. However, some general regulations, such as the GDPR in Europe, apply indirectly to AI. Certain sector-specific regulations, such as those relating to product safety, may also apply to products incorporating AI, such as medical devices.
Will the differences in regulations between Europe and the United States be even more marked when it comes to AI?
Historically, Europe has been more proactive in regulating digital technologies, as the adoption of the GDPR demonstrates. However, even in the US, the idea of regulating AI has been gaining ground: the CEO of OpenAI, for example, told the US Congress that AI regulation would be beneficial. It should be noted, however, that what US technology executives see as adequate regulation may not be exactly what Europe envisages. It is with the aim of anticipating the AI Act and securing the support of the industry’s major international players that European Commissioners Margrethe Vestager (competition) and Thierry Breton (internal market) have proposed, respectively, an AI Code of Conduct and an AI Pact.