
The scourge of manipulation: how can we combat deepfakes?

Célia Zolynski
Associate Professor of Private Law at Université Paris 1 Panthéon-Sorbonne
Key takeaways
  • According to the AI Act, a deepfake is defined as an image, audio or video content manipulated by AI, which bears a resemblance to real people, objects, places, entities or events.
  • There is a difference between digital “replicas” (i.e. imitations of a person) and digital “forgeries” (i.e. digital counterfeits).
  • In 2023, 96% of manipulated videos accessible online were sexual in nature, and most targeted women.
  • The European Commission wants to impose labelling requirements on online platforms and generative AI providers.
  • In France, AI-generated content is punishable if the person depicted has not given their consent, or if the parodic nature of the content is not immediately apparent.

Legal authorities have had to address deepfakes, a growing phenomenon that can affect both individuals and social cohesion. How are they defined under European law?

Célia Zolynski. Deepfakes are content generated by artificial intelligence that can blur the line between truth and falsehood. They are currently being discussed at the level of the European Union. According to the Artificial Intelligence Act (AI Act, or RIA in French), published in July 2024, a deepfake is defined as an image, audio content or video manipulated by AI which bears a resemblance to real people, objects, places, entities or events. Audio deepfakes include so-called fake voices produced by voice cloning, which reproduces a person’s voice realistically using artificial intelligence. Such content can be mistakenly perceived as authentic or truthful. By adopting such a broad definition, the European Union’s objective is to prevent any manipulation of public opinion and to regulate problematic cases involving the distortion of an individual’s image.

From a legal perspective, there is a difference between digital “replicas”, which encompass extensions of a person (an artist’s voice, the modification of their image), and digital “forgeries”. The latter category targets intimate representations of individuals and includes, for example, non-consensual sexual deepfakes. These digital forgeries are created to harm a specific individual and are in many cases followed by harassment or sexual blackmail (“sextortion”). The perpetrator approaches the victim in an online chat under the guise of a fake account, establishes a relationship of trust to obtain initial intimate content and, once that content has been obtained, moves on to blackmail, demanding money or further material. In some cases, the ultimate goal is criminal or paedophilic in nature.

What are the main dangers? Who are the most vulnerable to deepfakes?

At the individual level, in 2023, 96%¹ of manipulated videos accessible online were sexual in nature and mainly targeted women. Between 2022 and 2023, the number of these sexual deepfakes reportedly rose sharply (+464%), according to the company Home Security Heroes². This finding shows how the issue of deepfakes goes beyond a simple technological challenge: women’s rights, the protection of their image and their position in the public sphere are under threat. Furthermore, many cases have involved politicians or journalists. Various studies on the impact of deepfakes confirm this, all highlighting the risk that these digital practices will silence women’s voices.
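To put that growth figure in perspective, and assuming it denotes a simple year-on-year percentage increase, a rise of 464% means the 2023 volume was roughly 5.6 times the 2022 volume:

$$N_{2023} = N_{2022}\times\left(1+\frac{464}{100}\right) = 5.64\,N_{2022}$$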

At the societal level, these manipulations also have consequences for democratic balance by increasing the risk of disinformation. 2024 was historically the year with the highest number of elections worldwide, with 76 national (legislative or presidential) elections taking place. At the same time, the use of generative AI was booming, raising fears of the widespread use of deepfakes in disinformation strategies. In Germany, for example, false information aimed at destabilising public opinion circulated on social media during the 2025 parliamentary election campaign, including rumours claiming that paedophilia would be legalised and that 1.9 million Kenyan workers would be arriving. According to the German authorities, these rumours may have originated from interference networks linked to Russia.

What legal framework has been developed within the EU to limit the misuse of content? Is it sufficient?

To limit this phenomenon, legal instruments exist, foremost among them the European Union’s AI Act. The text addresses deepfakes on several levels. In principle, they are not prohibited: the use of these technologies as a new tool in a cultural, artistic or scientific context is entirely legal. However, the regulation prohibits certain practices deemed overly harmful to people’s fundamental rights and freedoms, those that fall into the category of AI with unacceptable risks (Article 5). These include non-consensual sexual deepfakes targeting women or prepubescent children in sexually explicit situations.

Content that could threaten democratic processes is classified as high risk. The European Commission is working to compel online platforms and all generative AI providers to implement tools to limit the production and distribution of such content. The Commission is still negotiating with players in the digital sector to define a code of good practice, and the obligation to identify and label deepfakes should come into force in the summer of 2026. In other words, a label that cannot be removed must be affixed to the generated image or video. It remains to be seen what form the label should take to be well received by the public, and whether this measure will really be sufficient to limit cases of manipulation.
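To make the labelling idea concrete, here is a minimal sketch in Python of what pairing a visible notice with machine-readable provenance metadata could look like. It assumes the Pillow imaging library and uses hypothetical file and generator names; the AI Act does not prescribe this (or any) specific technical mechanism, so this is purely illustrative.

```python
# Illustrative sketch: pair a visible "AI-generated" notice with
# machine-readable provenance metadata on a PNG image.
# Assumes the Pillow library (pip install Pillow). This is NOT the
# mechanism mandated by the AI Act; the technical standard is still
# being negotiated.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo


def label_generated_image(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path).convert("RGB")

    # Visible label: stamp the disclosure onto the pixels themselves,
    # so it survives casual re-sharing and screenshots.
    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, img.height - 24), (img.width, img.height)], fill=(0, 0, 0))
    draw.text((8, img.height - 20), "AI-generated content", fill=(255, 255, 255))

    # Machine-readable label: store provenance fields as PNG text chunks.
    # Plain metadata like this is trivially strippable, which is why the
    # policy debate centres on robust watermarking instead.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", "example-model-v1")  # hypothetical identifier
    img.save(dst_path, pnginfo=meta)


if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    label_generated_image("generated.png", "generated_labelled.png")
```

The sketch also illustrates the difficulty the Commission faces: metadata of this kind can be stripped in seconds, which is precisely why the discussion has turned to robust, hard-to-remove labelling standards.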

In France, AI-generated content is punishable if the person depicted has not given their consent, or if the parodic nature of the content is not immediately apparent. Deepfakes of a sexual nature are automatically punishable if the person has not consented to their distribution, and penalties can include imprisonment. Many countries have updated their legislation to impose penalties as soon as harm occurs. For example, the United States passed the TAKE IT DOWN Act in May 2025, introducing specific penalties when children are targeted or in cases of sextortion. At the international level, several UNESCO studies have highlighted the criminal dangers and the violations of women’s rights and image.

Law enforcement agencies sometimes find it difficult to investigate these cases, given the sheer volume of content in the digital space and their often limited resources. This makes it all the more necessary to raise awareness among the relevant audiences upstream. The issue of paedocriminal deepfakes is crucial, both in terms of its social importance and its volume. Awareness campaigns alone are insufficient, yet they are essential to protect and help children and to prevent their isolation.

Interview by Alicia Piveteau
1. State of Deepfakes 2023, https://regmedia.co.uk/2019/10/08/deepfake_report.pdf
2. Home Security Heroes, https://www.securityhero.io/state-of-deepfakes/assets/pdf/state-of-deepfake-infographic-2023.pdf
