
How AI is affecting the quality of factual information

Thierry Warin
Professor of Data Science for Global Transformations at HEC Montreal
Key takeaways
  • According to NewsGuard, more than 2,089 AI-generated news sites are currently operating, publishing content in 16 languages.
  • In August 2025, leading AI chatbots relayed false claims in 35% of cases, compared to 18% the previous year.
  • According to Entrust's 2025 Identity Fraud Report, a deepfake attack occurred every five minutes in 2024.
  • Social media personalisation algorithms contribute to the fragmentation of the public sphere, creating echo chambers.
  • Human countermeasures, such as moderation and media literacy, must evolve to keep pace with this phenomenon.

A few years ago, creating a convincing fake video required considerable resources and advanced technical expertise. Today, all it takes is a few dollars and a couple of minutes. This democratisation of information manipulation through generative artificial intelligence poses new challenges for information verification and trust in the media. Recent data illustrates the scale of the phenomenon. According to NewsGuard, more than 2,089 AI-generated news sites are currently operating with little or no human oversight, publishing content in 16 languages, including French, English, Arabic and Chinese. This represents a 1,150% increase since April 2023.

“These social media platforms play a dual role,” explains Professor Thierry Warin, an analyst of economic dynamics in the era of big data and a specialist in information issues in the digital age. “On the one hand, they democratise speech. On the other, they can become a vehicle for spreading fake news on a large scale.” AI tools themselves sometimes contribute to this phenomenon. A study by NewsGuard shows that in August 2025, the leading AI chatbots relayed false claims in 35% of cases, compared to 18% the previous year. Perplexity went from a 100% false-information refutation rate in 2024 to a 46.67% error rate in 2025. ChatGPT and Meta have an error rate of 40%.

Deepfakes represent a notable development in the field of content manipulation. According to Entrust’s 2025 Identity Fraud Report, a deepfake attack occurred every five minutes in 2024. Digital document forgeries increased by 244% compared to 2023, while overall digital fraud has grown by 1,600% since 2021. “The Center for Security and Emerging Technology estimates that a basic deepfake can be produced for a few dollars and in less than ten minutes,” notes Thierry Warin. “High-quality deepfakes, on the other hand, can cost between $300 and $20,000 per minute.”

Electoral interference and information manipulation

The year 2024, marked by a large number of elections around the world, saw the emergence of sophisticated disinformation campaigns. The Doppelgänger campaign, orchestrated by pro-Russian actors in the run-up to the 2024 European elections, is a notable example. It combined seven domains impersonating recognised media outlets, 47 inauthentic websites and 657 articles amplified by thousands of automated accounts. The ‘Portal Kombat’ network (also known as ‘Pravda’) illustrates a systematic approach to information dissemination. According to VIGINUM, this Moscow-based network published 3.6 million articles in 2024 on global online platforms. With 150 domain names in 46 languages, it publishes an average of 20,273 articles every 48 hours.

NewsGuard tested ten of the most popular generative AI models: in 33% of cases, these models repeated claims disseminated by the Pravda network. “This content influences artificial intelligence systems that rely on this data to generate their responses,” the report states. This technique, known as ‘LLM grooming,’ involves saturating search results with biased data to influence AI responses. “Many recent elections have been marred by disinformation campaigns,” Thierry Warin points out. “During the 2016 US presidential election, the United States responded to Russian interference by expelling 35 diplomats. With generative AI, the scale of the phenomenon has changed.”
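The mechanic behind ‘LLM grooming’ can be sketched with a deliberately simplified model. The corpus, the claim labels and the majority rule below are illustrative assumptions, not how any real chatbot works; the sketch only shows why sheer publication volume can sway a system that leans on what it retrieves:

```python
from collections import Counter

def answer_by_majority(corpus, topic):
    """Toy retrieval-style 'answer': return the claim that appears most
    often in documents mentioning the topic. Purely illustrative; real
    systems are far more complex, but volume can still bias retrieval."""
    claims = [doc["claim"] for doc in corpus if topic in doc["text"]]
    return Counter(claims).most_common(1)[0][0] if claims else None

# A handful of fact-checks versus a flood of near-identical copies,
# mimicking a network that mass-publishes the same false claim.
fact_checks = [{"text": "fact-check of claim X", "claim": "debunked"}] * 5
flood = [{"text": "article repeating claim X", "claim": "asserted"}] * 40

print(answer_by_majority(fact_checks, "claim X"))           # "debunked"
print(answer_by_majority(fact_checks + flood, "claim X"))   # "asserted"
```

In this toy setting, five fact-checks settle the answer until forty duplicated articles outvote them; nothing about the false claim became more credible, only more frequent.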

The role of personalisation algorithms

Beyond the creation of fake content, social media personalisation algorithms contribute to the fragmentation of the public sphere. “These systems tend to offer internet users content that matches their preferences,” explains Professor Warin. “This can create what are known as echo chambers.” Studies show that on Facebook, only about 15% of interactions involve exposure to divergent opinions. The content that generates the most interactions is often amplified by algorithms, which can reinforce ideological divides.
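The feedback loop described above can be illustrated with a toy simulation. Every number and the ranking rule here are assumptions made for illustration, not platform data: a feed that ranks items by past click-through rate quickly stops surfacing content that diverges from the user's leaning.

```python
def simulate_feed(rounds=20, feed_size=10):
    """Toy engagement-driven ranker; illustrative only, not a real platform.

    Items carry a 'leaning' in [-1, 1]. The user clicks items close to
    their own leaning; the ranker scores each item by the past
    click-through rate of its leaning bucket."""
    user_leaning = 0.8

    def bucket(x):  # coarse leaning buckets: -1, 0, +1
        return -1 if x < -0.33 else (1 if x > 0.33 else 0)

    clicks = {-1: 1, 0: 1, 1: 1}  # Laplace-smoothed click counts per bucket
    shown = {-1: 1, 0: 1, 1: 1}
    divergent_share = []
    for _ in range(rounds):
        # a fixed, repeating pool of items spanning the whole spectrum
        pool = [(i % 21 - 10) / 10 for i in range(100)]
        # rank by the estimated click-through rate of each item's bucket
        pool.sort(key=lambda x: clicks[bucket(x)] / shown[bucket(x)], reverse=True)
        feed = pool[:feed_size]
        divergent_share.append(
            sum(1 for x in feed if bucket(x) != bucket(user_leaning)) / feed_size
        )
        for x in feed:
            shown[bucket(x)] += 1
            if abs(x - user_leaning) < 0.5:  # user engages with like-minded items
                clicks[bucket(x)] += 1
    return divergent_share

shares = simulate_feed()
print(f"divergent content shown: round 1: {shares[0]:.0%}, round 20: {shares[-1]:.0%}")
```

With these illustrative settings, divergent content falls from the entire first feed to none at all from the second round on: one round of engagement signals is enough for the ranker to lock onto the user's side of the spectrum.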

In response to these developments, several initiatives have been put in place. Finland and Sweden score highest in media literacy, with 74 and 71 points respectively on the European Media Literacy Index 2023. The European Commission has adopted the 2022 Strengthened Code of Practice on Disinformation to improve platform transparency. In Canada, the Communications Security Establishment published its report Cyber Threats to Canada’s Democratic Process – 2023 Update, which analyses the use of generative AI in contexts of information interference.

“Traditional countermeasures – human moderation, fact-checking, media literacy – must evolve to adapt to the scale of the phenomenon,” observes Thierry Warin. “Technological solutions, such as synthetic content detectors and digital watermarks, are currently being developed.”

Evolution of the information ecosystem

The year 2025 marks a significant change. The non-response rate of AI systems to sensitive questions has fallen to 0%, compared to 31% in 2024. On the other hand, their propensity to repeat false information has increased. Models now prioritise responsiveness, which can make them more vulnerable to unverified content online.

“The adage that ‘information is power’, attributed to Cardinal Richelieu, remains relevant,” concludes Professor Warin. “From printing to television, each media revolution has redistributed the power of information. With generative AI, we are witnessing a major transformation of this ecosystem.” The question now is how to adapt the mechanisms of verification and trust in information to this new technological reality. Ongoing initiatives, whether technological, regulatory or educational, aim to address this challenge.
