The scourge of manipulation: how can we combat deepfakes?
- According to the AI Act, a deepfake is defined as an image, audio or video content manipulated by AI, which bears a resemblance to real people, objects, places, entities or events.
- There is a difference between digital “replicas” (i.e. imitations of a person) and digital “forgeries” (i.e. digital counterfeits).
- In 2023, 98% of manipulated videos accessible online were sexual in nature, and most targeted women.
- The European Commission wants to impose labelling requirements on online platforms and generative AI providers.
- In France, AI-generated content is punishable if the person depicted has not given their consent, or if the parodic nature of the content is not immediately apparent.
Legal authorities have had to address deepfakes, a growing phenomenon that can affect both individuals and social cohesion. How are they defined under European law?
Célia Zolynski. Deepfakes are content generated by artificial intelligence that can blur the line between truth and falsehood. They are currently being discussed at the level of the European Union. According to the Artificial Intelligence Act (AI Act, or RIA in French), published in July 2024, a deepfake is defined as an image, audio content or video manipulated by AI that bears a resemblance to real people, objects, places, entities or events. Audio deepfakes include so-called fake voices produced by voice cloning, which reproduces a person’s voice realistically using artificial intelligence. Such content can be mistakenly perceived as authentic or truthful. By adopting such a broad definition, the European Union aims to prevent the manipulation of public opinion and to regulate problematic cases involving the distortion of an individual’s image.
From a legal perspective, there is a difference between digital “replicas”, which encompass extensions of a person (i.e. an artist’s voice, the modification of their image), and digital “forgeries”. The latter category targets intimate representations of individuals and includes, for example, non-consensual sexual deepfakes. These digital forgeries are created to harm a specific individual and in many cases are followed by harassment or sexual blackmail (“sextortion”). The perpetrator approaches the victim in an online chat under the guise of a fake account and establishes a relationship of trust in order to obtain initial intimate content. Once this content has been obtained, they can move on to blackmail, demanding money or further content. In some cases, the ultimate goal is criminal or paedophilic in nature.
What are the main dangers? Who are the most vulnerable to deepfakes?
At the individual level, in 2023, 98% of manipulated videos accessible online were sexual in nature and mainly targeted women. Between 2022 and 2023, the number of these sexual deepfakes reportedly increased sharply (+464%), according to the company Home Security Heroes. This finding shows that the issue of deepfakes goes beyond a simple technological challenge: women’s rights, the protection of their image and their position in the public sphere are under threat. Furthermore, many cases have involved politicians or journalists. Various studies on the impact of deepfakes confirm this, all highlighting the risk that these digital practices will silence women’s voices.

At the societal level, these manipulations also have consequences for democratic balance by increasing the risk of disinformation. 2024 was historically the year with the highest number of elections worldwide, with 76 national (legislative or presidential) elections taking place. At the same time, the use of generative AI was booming, raising fears of the widespread use of deepfakes in disinformation strategies. In Germany, for example, false information aimed at destabilising public opinion circulated on social media during the 2025 parliamentary election campaign, including rumours that paedophilia would be legalised and that 1.9 million Kenyan workers would be arriving in the country. According to the German authorities, these rumours may have originated from interference networks linked to Russia.
What legal framework has been developed within the EU to limit the misuse of content? Is it sufficient?
To limit this phenomenon, legal instruments exist, chief among them the European Union’s AI regulation. The text addresses deepfakes on several levels. In principle, they are not prohibited: using these technologies as a new tool in a cultural, artistic or scientific context is entirely legal. However, the regulation prohibits certain practices deemed overly harmful to people’s fundamental rights and freedoms, those falling into the category of AI posing unacceptable risks (Article 5). These include non-consensual sexual deepfakes targeting women, or depicting prepubescent children in sexually explicit situations.
Content that could threaten democratic processes is classified as high risk. The European Commission is working to compel online platforms and all generative AI providers to implement tools to limit the production and distribution of such content. While a code of good practice is still being negotiated with players in the digital sector, the obligation to identify and label deepfakes is expected to come into force in the summer of 2026. In other words, a label that cannot be removed must be affixed to the generated image or video. It remains to be seen what form the label should take to be well received by the public, and whether this measure will really be enough to limit cases of manipulation.
In France, AI-generated content is punishable if the person depicted has not given their consent, or if its parodic nature is not immediately apparent. Deepfakes of a sexual nature are automatically punishable if the person has not consented to their distribution; penalties can include imprisonment. Many countries have updated their legislation to impose penalties as soon as harm occurs. The United States, for example, passed the TAKE IT DOWN Act in May 2025, introducing specific penalties when children are targeted or in cases of sextortion. At the international level, several UNESCO studies have highlighted the criminal dangers and the violations of women’s rights and image.
Law enforcement agencies sometimes find it difficult to investigate these cases due to the sheer volume of content in the digital space and their often limited resources. This makes it all the more necessary to raise awareness among the relevant audiences upstream. The issue of paedocriminal deepfakes is crucial, both in terms of its social importance and its volume. Awareness campaigns are not sufficient on their own, but they are essential to protect and support children and prevent their isolation.

