
The scourge of manipulation: how can we combat deepfakes?

Célia Zolynski
Associate Professor of Private Law at Université Paris 1 Panthéon-Sorbonne
Key takeaways
  • According to the AI Act, a deepfake is defined as image, audio or video content generated or manipulated by AI that bears a resemblance to real people, objects, places, entities or events.
  • There is a difference between digital “replicas” (i.e. imitations of a person) and digital “forgeries” (i.e. digital counterfeits).
  • In 2023, 96% of manipulated videos accessible online were sexual in nature, and most targeted women.
  • The European Commission wants to impose labelling requirements on online platforms and generative AI providers.
  • In France, AI-generated content is punishable if the person depicted has not given their consent, or if the parodic nature of the content is not immediately apparent.

Legal authorities have had to address deepfakes, a growing phenomenon that can affect both individuals and social cohesion. How are they defined under European law?

Célia Zolynski. Deepfakes are content generated by artificial intelligence that can blur the line between truth and falsehood. They are currently being discussed at the level of the European Union. According to the Artificial Intelligence Act (AI Act), published in July 2024, a deepfake is defined as image, audio or video content generated or manipulated by AI that bears a resemblance to real people, objects, places, entities or events and can be mistakenly perceived as authentic or truthful. Audio deepfakes include so-called fake voices produced by voice cloning, which uses artificial intelligence to reproduce a person's voice realistically. By adopting such a broad definition, the European Union aims to prevent any manipulation of public opinion and to regulate problematic cases involving the distortion of an individual's image.

From a legal perspective, there is a difference between digital "replicas", which encompass extensions of a person (e.g. an artist's voice, the modification of their image), and digital "forgeries". The latter category targets intimate representations of individuals and includes, for example, non-consensual sexual deepfakes. These digital forgeries are created to harm a specific individual and are in many cases followed by harassment or sexual blackmail ("sextortion"). The perpetrator phishes the victim in an online chat under the guise of a fake account, building a relationship of trust in order to obtain initial intimate content. Once that content has been obtained, they can move on to blackmail, demanding money or further material. In some cases, the ultimate goal is criminal or paedophilic in nature.

What are the main dangers? Who is most vulnerable to deepfakes?

At the individual level, in 2023, 96%¹ of manipulated videos accessible online were sexual in nature and mainly targeted women. Between 2022 and 2023, the number of these sexual deepfakes reportedly increased sharply (+464%) according to the company Home Security Heroes². This finding shows that the issue of deepfakes goes beyond a simple technological challenge: women's rights, the protection of their image and their position in the public sphere are under threat. Furthermore, many cases have involved politicians or journalists. Various studies on the impact of deepfakes confirm this, all highlighting the risk that these digital practices will silence women's voices.

At the societal level, these manipulations also have consequences for democratic balance by increasing the risk of disinformation. 2024 was historically the year with the most elections worldwide, with 76 national (legislative or presidential) elections taking place. At the same time, the use of generative AI was booming, raising fears of the widespread use of deepfakes in disinformation strategies. In Germany, for example, false information aimed at destabilising public opinion circulated on social media during the 2025 parliamentary election campaign, including rumours claiming that paedophilia was about to be legalised and that 1.9 million Kenyan workers were about to arrive. According to the German authorities, these rumours may have originated from interference networks linked to Russia.

What legal framework has been developed within the EU to limit the misuse of content? Is it sufficient?

To limit this phenomenon, legal instruments exist, foremost among them the European Union's AI Act. The text addresses deepfakes on several levels. In principle, they are not prohibited: using these technologies as a new tool in a cultural, artistic or scientific context is entirely legal. However, the regulation prohibits certain practices deemed overly harmful to people's fundamental rights and freedoms, namely those falling into the category of AI posing unacceptable risks (Article 5). These include non-consensual sexual deepfakes targeting women, or prepubescent children depicted in sexually explicit situations.

Content that could threaten democratic processes is classified as high risk. The European Commission is working to compel online platforms and all generative AI providers to implement tools to limit the production and distribution of such content. The Commission is still negotiating with players in the digital sector to define a code of good practice, and the obligation to identify and label deepfakes should come into force in the summer of 2026. In other words, a label that cannot be removed must be affixed to the generated image or video. It remains to be seen what form the label should take to be well received by the public, and whether this measure will really be enough to limit cases of manipulation.

In France, AI-generated content is punishable if the person depicted has not given their consent, or if its parodic nature is not immediately apparent. Deepfakes of a sexual nature are automatically punishable if the person has not consented to their distribution, and penalties can include imprisonment. Many countries have updated their legislation to impose penalties as soon as harm occurs. The United States, for example, passed the Take It Down Act in May 2025, introducing specific penalties when children are targeted or in cases of sextortion. At the international level, several UNESCO studies have highlighted the criminal dangers and the violations of women's rights and image.

Law enforcement agencies sometimes find it difficult to investigate these cases, given the sheer volume of content in the digital space and their often limited resources. This makes it all the more necessary to raise awareness among the relevant audiences upstream. The issue of deepfakes involving child sexual abuse is crucial, both in its social importance and in its volume. Awareness campaigns are not sufficient on their own, yet they are essential to protect and support children and prevent their isolation.

Interview by Alicia Piveteau
1. State of Deepfakes 2023, https://regmedia.co.uk/2019/10/08/deepfake_report.pdf
2. Home Security Heroes, https://www.securityhero.io/state-of-deepfakes/assets/pdf/state-of-deepfake-infographic-2023.pdf
