
How AI is affecting quality of factual information

Thierry Warin
Professor of Data Science for Global Transformations at HEC Montreal
Key takeaways
  • According to NewsGuard, more than 2,089 AI-generated news sites are currently operating, publishing content in 16 languages.
  • In August 2025, leading AI chatbots relayed false claims in 35% of cases, compared to 18% the previous year.
  • According to Entrust’s 2025 Identity Fraud Report, a deepfake attack occurred every five minutes in 2024.
  • Social media personalisation algorithms also contribute to the fragmentation of the public sphere, creating echo chambers.
  • Traditional countermeasures, such as human moderation, fact-checking and media literacy, must evolve to match the scale of the phenomenon.

A few years ago, creating a convincing fake video required considerable resources and advanced technical expertise. Today, all it takes is a few dollars and a couple of minutes. This democratisation of information manipulation through generative artificial intelligence poses new challenges for information verification and trust in the media. Recent data illustrates the scale of the phenomenon. According to NewsGuard, more than 2,089 AI-generated news sites are currently operating with little or no human oversight, publishing content in 16 languages, including French, English, Arabic and Chinese. This represents a 1,150% increase since April 2023.

“These social media platforms play a dual role,” explains Professor Thierry Warin, an analyst of economic dynamics in the era of big data and a specialist in information issues in the digital age. “On the one hand, they democratise speech. On the other, they can become a vehicle for spreading fake news on a large scale.” AI tools themselves sometimes contribute to this phenomenon. A NewsGuard study shows that in August 2025, the leading AI chatbots relayed false claims in 35% of cases, compared to 18% the previous year. Perplexity, which debunked 100% of false claims in 2024, reached a 46.67% error rate in 2025. ChatGPT and Meta both show an error rate of 40%.

Deepfakes represent a notable development in the field of content manipulation. According to Entrust’s 2025 Identity Fraud Report, a deepfake attack occurred every five minutes in 2024. Digital document forgeries increased by 244% compared to 2023, while overall digital fraud has grown by 1,600% since 2021. “The Centre for Security and Emerging Technology estimates that a basic deepfake can be produced for a few dollars and in less than ten minutes,” notes Thierry Warin. “High-quality deepfakes, on the other hand, can cost between $300 and $20,000 per minute.”

Electoral interference and information manipulation

The year 2024, marked by a large number of elections around the world, saw the emergence of sophisticated disinformation campaigns. The Doppelgänger campaign, orchestrated by pro-Russian actors in the run-up to the 2024 European elections, is a notable example. It combined seven domains impersonating recognised media outlets, 47 inauthentic websites and 657 articles amplified by thousands of automated accounts. The ‘Portal Kombat’ network (also known as ‘Pravda’) illustrates a systematic approach to information dissemination. According to VIGINUM, this Moscow-based network published 3.6 million articles in 2024 on global online platforms. With 150 domain names in 46 languages, it published an average of 20,273 articles every 48 hours.

NewsGuard tested ten of the most popular generative AI models: in 33% of cases, these models repeated claims disseminated by the Pravda network. “This content influences artificial intelligence systems that rely on this data to generate their responses,” the report states. This technique, known as ‘LLM grooming,’ involves saturating search results with biased data to influence AI responses. “Many recent elections have been marred by disinformation campaigns,” Thierry Warin points out. “During the 2016 US presidential election, the United States responded to Russian interference by expelling 35 diplomats. With generative AI, the scale of the phenomenon has changed.”
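To make the mechanism concrete, below is a minimal Python sketch of how flooding a corpus with near-duplicate articles can crowd out a single accurate source in a naive, frequency-weighted retrieval step. The documents, the `top_claims` ranking function and the query are all invented for illustration; this is not NewsGuard’s methodology, nor any real system’s retrieval pipeline.

```python
from collections import Counter

# Toy illustration of 'LLM grooming' with invented documents: a naive
# retrieval step that weights verbatim repetition mistakes volume for truth.

def top_claims(corpus: list[str], query_terms: set[str], k: int = 3) -> list[tuple[str, int]]:
    """Return the k most frequent documents sharing at least one term with the query."""
    matching = [doc for doc in corpus if query_terms & set(doc.lower().split())]
    return Counter(matching).most_common(k)

# One reliable article on the topic...
corpus = ["independent audit finds the election results were accurate"]
# ...then the grooming step: a network mass-publishes a near-identical false claim.
corpus += ["leaked report claims the election results were falsified"] * 200

# A system that treats repetition as corroboration now surfaces the planted
# claim 200 times more often than the accurate one.
for text, count in top_claims(corpus, {"election", "results"}):
    print(f"{count:4d}x  {text}")
```

Real retrieval pipelines are far more sophisticated, but the pressure the sketch isolates is the one ‘LLM grooming’ exploits: sheer volume and repetition can masquerade as corroboration.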

The role of personalisation algorithms

Beyond the creation of fake content, social media personalisation algorithms contribute to the fragmentation of the public sphere. “These systems tend to offer internet users content that matches their preferences,” explains Professor Warin. “This can create what are known as echo chambers.” Studies show that on Facebook, only about 15% of interactions involve exposure to divergent opinions. The content that generates the most interactions is often amplified by algorithms, which can reinforce ideological divides.
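A toy simulation can illustrate the feedback loop Professor Warin describes. In the sketch below, every ingredient (the similarity score, the ‘engagement’ bonus for extreme items and the drift rate) is an invented assumption, not any platform’s actual ranking: a feed repeatedly shows a user the highest-scoring item, and each exposure nudges the user’s position toward what was shown.

```python
import random

# Toy model of an engagement-optimising feed (illustrative assumptions only,
# not any platform's real ranking). Items carry a leaning in [-1, 1].
random.seed(1)
items = [random.uniform(-1, 1) for _ in range(1000)]

def feed_score(item: float, user: float) -> float:
    similarity = 1.0 - abs(item - user) / 2.0  # prefer content like the user's
    engagement = 0.5 + abs(item) / 2.0         # assume extreme content engages more
    return similarity * engagement

user = 0.05  # the user starts near the centre
for step in range(30):
    shown = max(items, key=lambda item: feed_score(item, user))
    user += 0.2 * (shown - user)  # repeated exposure shifts the user's leaning
    if step % 10 == 0:
        print(f"step {step:2d}: shown {shown:+.2f}, user leaning {user:+.2f}")
print(f"final leaning after 30 steps: {user:+.2f}")
```

Under these made-up parameters, the simulated user drifts from near the centre (+0.05) to roughly +0.96 within thirty steps: a caricature, but one that shows how preference matching combined with an engagement bonus can pull a feed toward ever more one-sided content.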

In response to these developments, several initiatives have been put in place. Finland and Sweden score highest in media literacy, with 74 and 71 points respectively on the European Media Literacy Index 2023. The European Commission has adopted the 2022 Strengthened Code of Practice on Disinformation to improve platform transparency. In Canada, the Communications Security Establishment published its report Cyber Threats to Canada’s Democratic Process – 2023 Update, which analyses the use of generative AI in contexts of information interference.

“Traditional countermeasures – human moderation, fact-checking, media literacy – must evolve to adapt to the scale of the phenomenon,” observes Thierry Warin. “Technological solutions, such as synthetic content detectors and digital watermarks, are currently being developed.”
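One published family of such techniques is statistical watermarking of generated text. The sketch below is a toy reduction of the ‘green list’ idea proposed by Kirchenbauer et al. (2023), offered as an illustration rather than a description of any deployed detector: during generation, a hash of the previous token marks roughly half the vocabulary as ‘green’ and the model is biased toward green tokens; detection then needs only the same hash and a z-test on the green-token count.

```python
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign `token` to a 'green list' seeded by `prev_token`."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all tokens are green per seed

def green_z_score(text: str) -> float:
    """z-score of the observed green-token count against the 50% chance baseline."""
    tokens = text.lower().split()
    n = len(tokens) - 1  # number of (previous, current) token pairs
    if n < 1:
        return 0.0
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

# Ordinary text should score near 0; text from a generator that was biased
# toward green tokens would score several standard deviations higher.
print(green_z_score("ordinary human-written text scores close to the chance baseline"))
```

The appeal of this approach is that detection is purely statistical and needs no access to the generating model beyond the hashing scheme; the limitation, equally visible in the sketch, is that paraphrasing or retokenising the text erodes the signal.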

Evolution of the information ecosystem

The year 2025 marks a significant change. The non-response rate of AI chatbots to sensitive questions has fallen to 0%, compared to 31% in 2024. On the other hand, their propensity to repeat false information has increased. Models now prioritise responsiveness, which can make them more vulnerable to unverified content online.

“The adage that ‘information is power’, attributed to Cardinal Richelieu, remains relevant,” concludes Professor Warin. “From printing to television, each media revolution has redistributed the power of information. With generative AI, we are witnessing a major transformation of this ecosystem.” The question now is how to adapt the mechanisms of verification and trust in information to this new technological reality. Ongoing initiatives, whether technological, regulatory or educational, aim to address this challenge.
