
The scourge of manipulation: how can we combat deepfakes?

Célia Zolynski
Associate Professor of Private Law at Université Paris 1 Panthéon-Sorbonne
Key takeaways
  • According to the AI Act, a deepfake is defined as an image, audio or video content manipulated by AI, which bears a resemblance to real people, objects, places, entities or events.
  • There is a difference between digital “replicas” (i.e. imitations of a person) and digital “forgeries” (i.e. digital counterfeits).
  • In 2023, 96% of manipulated videos accessible online were sexual in nature, and most targeted women.
  • The European Commission wants to impose labelling requirements on online platforms and generative AI providers.
  • In France, AI-generated content is punishable if the person depicted has not given their consent, or if the parodic nature of the content is not immediately apparent.

Legal authorities have had to address deepfakes, a growing phenomenon that can affect both individuals and social cohesion. How are they defined under European law?

Célia Zolynski. Deepfakes are content generated by artificial intelligence that can blur the line between truth and falsehood. They are currently being discussed at the level of the European Union. According to the Artificial Intelligence Act (AI Act), published in July 2024, a deepfake is defined as an image, audio content or video manipulated by AI that bears a resemblance to real people, objects, places, entities or events. Audio deepfakes include so-called fake voices, or voice cloning, which uses artificial intelligence to reproduce a person’s voice realistically. Such content can be mistakenly perceived as authentic or truthful. By adopting such a broad definition, the European Union’s objective is to prevent the manipulation of public opinion and to regulate problematic cases involving the distortion of an individual’s image.

From a legal perspective, there is a difference between digital “replicas”, which encompass extensions of a person (e.g. an artist’s voice, or the modification of their image), and digital “forgeries”. The latter category targets intimate representations of individuals and includes, for example, non-consensual sexual deepfakes. These digital forgeries are created to harm a specific individual and in many cases are followed by harassment or sexual blackmail (“sextortion”). The perpetrator approaches the victim in an online chat under the guise of a fake account, establishing a relationship of trust in order to obtain initial intimate content. Once this content has been obtained, they can move on to blackmail, demanding money or further material. In some cases, the ultimate goal is criminal or paedophilic in nature.

What are the main dangers? Who are the most vulnerable to deepfakes?

At the individual level, in 2023, 96%¹ of manipulated videos accessible online were sexual in nature, and they mainly targeted women. Between 2022 and 2023, the number of these sexual deepfakes reportedly increased sharply (+464%), according to the company Home Security Heroes². This finding shows that the issue of deepfakes goes beyond a simple technological challenge: women’s rights, the protection of their image and their position in the public sphere are under threat. Furthermore, many cases have involved politicians or journalists. Various studies on the impact of deepfakes confirm this, all highlighting the risk that these digital practices will silence women’s voices.

At the societal level, these manipulations also have consequences for democratic balance by increasing the risk of disinformation. 2024 was historically the year with the highest number of elections worldwide, with 76 national elections (legislative or presidential) taking place. At the same time, the use of generative AI was booming, raising fears of the widespread use of deepfakes in disinformation strategies. In Germany, for example, during the 2025 parliamentary election campaign, false information aimed at destabilising public opinion circulated on social media, including rumours claiming that paedophilia would be legalised and that 1.9 million Kenyan workers would be arriving. According to the German authorities, these rumours may have originated from interference networks linked to Russia.

What legal framework has been developed within the EU to limit the misuse of content? Is it sufficient?

To limit this phenomenon, legal instruments exist, foremost among them the AI regulation adopted by the European Union. The text addresses deepfakes on several levels. In principle, they are not prohibited: the use of these technologies as a new tool in a cultural, artistic or scientific context is entirely legal. However, the regulation prohibits certain practices deemed overly harmful to people’s fundamental rights and freedoms, those that fall into the category of AI with unacceptable risks (Article 5). These include non-consensual sexual deepfakes targeting women, or depicting prepubescent children in sexually explicit situations.

Content that could threaten democratic processes is classified as high risk. The European Commission is working to compel online platforms and all generative AI providers to implement tools that limit the production and distribution of such content. The Commission is still negotiating with players in the digital sector to define a code of good practice, and the obligation to identify and label deepfakes should come into force in the summer of 2026. In other words, a label that cannot be removed must be affixed to the generated image or video. It remains to be seen what form the label should take to be well received by the public, and whether this measure will really be sufficient to limit cases of manipulation.

In France, AI-generated content is punishable if the person depicted has not given their consent, or if its parodic nature is not immediately apparent. Deepfakes of a sexual nature are automatically punishable if the person has not consented to their distribution; penalties can include imprisonment. Many countries have updated their legislation to impose penalties as soon as harm occurs. For example, the United States passed the TAKE IT DOWN Act in May 2025, introducing specific penalties when children are targeted or in cases of sextortion. At the international level, several UNESCO studies have highlighted the criminal dangers and the violations of women’s rights and image.

Law enforcement agencies sometimes find it difficult to investigate these cases, given the sheer volume of content in the digital space and their often limited resources. This makes it all the more necessary to raise awareness among the relevant audiences upstream. The issue of paedocriminal deepfakes is crucial, both in terms of its social importance and its volume. Awareness campaigns alone are insufficient, yet they are essential to protect and help children and prevent their isolation.

Interview by Alicia Piveteau
¹ State of Deepfakes 2023, https://regmedia.co.uk/2019/10/08/deepfake_report.pdf
² https://www.securityhero.io/state-of-deepfakes/assets/pdf/state-of-deepfake-infographic-2023.pdf
