
How AI is affecting the quality of factual information

Thierry Warin
Professor of Data Science for Global Transformations at HEC Montreal
Key takeaways
  • According to NewsGuard, more than 2,089 AI-generated news sites are currently operating, publishing content in 16 languages.
  • In August 2025, leading AI chatbots relayed false claims in 35% of cases, compared to 18% the previous year.
  • According to Entrust's 2025 Identity Fraud Report, a deepfake attack occurred every five minutes in 2024.
  • Furthermore, social media personalization algorithms contribute to the fragmentation of the public sphere, creating echo chambers.
  • Human countermeasures such as moderation, fact-checking and media literacy must evolve to match the scale of the phenomenon.

A few years ago, creating a convincing fake video required considerable resources and advanced technical expertise. Today, all it takes is a few dollars and a couple of minutes. This democratisation of information manipulation through generative artificial intelligence poses new challenges for information verification and trust in the media. Recent data illustrates the scale of the phenomenon. According to NewsGuard, more than 2,089 AI-generated news sites are currently operating with little or no human oversight, publishing content in 16 languages, including French, English, Arabic and Chinese. This represents a 1,150% increase since April 2023.

“These social media platforms play a dual role,” explains Professor Thierry Warin, an analyst of economic dynamics in the era of big data and a specialist in information issues in the digital age. “On the one hand, they democratise speech. On the other, they can become a vehicle for spreading fake news on a large scale.” AI tools themselves sometimes contribute to this phenomenon. A study by NewsGuard shows that in August 2025, the leading AI chatbots relayed false claims in 35% of cases, compared to 18% the previous year. Perplexity, which refuted 100% of false claims in 2024, reached an error rate of 46.67% in 2025; ChatGPT and Meta AI each have an error rate of 40%.

Deepfakes represent a notable development in the field of content manipulation. According to Entrust’s 2025 Identity Fraud Report, a deepfake attack occurred every five minutes in 2024. Digital document forgeries increased by 244% compared to 2023, while overall digital fraud has grown by 1,600% since 2021. “The Center for Security and Emerging Technology estimates that a basic deepfake can be produced for a few dollars and in less than ten minutes,” notes Thierry Warin. “High-quality deepfakes, on the other hand, can cost between $300 and $20,000 per minute.”

Electoral interference and information manipulation

The year 2024, marked by a large number of elections around the world, saw the emergence of sophisticated disinformation campaigns. The Doppelgänger campaign, orchestrated by pro-Russian actors in the run-up to the 2024 European elections, is a notable example. It combined seven domains impersonating recognised media outlets, 47 inauthentic websites and 657 articles amplified by thousands of automated accounts. The ‘Portal Kombat’ network (also known as ‘Pravda’) illustrates a systematic approach to information dissemination. According to VIGINUM, this Moscow-based network published 3.6 million articles in 2024 on global online platforms. With 150 domain names in 46 languages, it publishes an average of 20,273 articles every 48 hours.
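As a quick sanity check, the two VIGINUM figures quoted above are mutually consistent. The short computation below is a sketch using only the numbers in this article:

```python
# Consistency check on the VIGINUM figures quoted above (no external data).
articles_per_48h = 20_273            # reported average output every 48 hours
per_day = articles_per_48h / 2       # about 10,136 articles per day
per_year = per_day * 366             # 2024 was a leap year
print(f"{per_day:,.0f} articles/day -> {per_year / 1e6:.2f}M articles/year")
# Prints ~3.71M articles/year, close to the 3.6 million reported for 2024.
```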

NewsGuard tested ten of the most popular generative AI models: in 33% of cases, these models repeated claims disseminated by the Pravda network. “This content influences artificial intelligence systems that rely on this data to generate their responses,” the report states. This technique, known as ‘LLM grooming,’ involves saturating search results with biased data to influence AI responses. “Many recent elections have been marred by disinformation campaigns,” Thierry Warin points out. “During the 2016 US presidential election, the United States responded to Russian interference by expelling 35 diplomats. With generative AI, the scale of the phenomenon has changed.”
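NewsGuard does not detail the mechanics of LLM grooming, but the basic dynamic can be illustrated with a toy retrieval scenario: a system that grounds its answers in the most frequently retrieved passages will flip once near-duplicate poisoned pages outnumber legitimate ones. Everything below (the corpus, the claims, the majority-vote ‘answer’) is an invented sketch, not a description of any real chatbot:

```python
import random
from collections import Counter

def toy_answer(corpus: list[str], topic: str) -> str:
    """Crude stand-in for retrieval-grounded answering: fetch passages on
    the topic, then repeat the majority claim among a small sample."""
    retrieved = [doc for doc in corpus if topic in doc]
    sample = random.sample(retrieved, k=min(10, len(retrieved)))
    return Counter(sample).most_common(1)[0][0]

random.seed(0)
corpus = ["election: results certified by independent observers"] * 40
print("before grooming:", toy_answer(corpus, "election"))

# 'Grooming': flood the indexable corpus with near-duplicates of a false claim.
corpus += ["election: results falsified, officials hiding fraud"] * 400
print("after grooming: ", toy_answer(corpus, "election"))
# With poisoned pages outnumbering real ones 10 to 1, the majority answer flips.
```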

The role of personalisation algorithms

Beyond the creation of fake content, social media personalisation algorithms contribute to the fragmentation of the public sphere. “These systems tend to offer internet users content that matches their preferences,” explains Professor Warin. “This can create what are known as echo chambers.” Studies show that on Facebook, only about 15% of interactions involve exposure to divergent opinions. The content that generates the most interactions is often amplified by algorithms, which can reinforce ideological divides.
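The studies cited above measure the outcome; the feedback loop itself fits in a few lines. The toy model below is an invented sketch (all weights and positions are made up): content is ranked by predicted engagement, which rewards both agreement with the user’s current leaning and sheer intensity, and each exposure nudges the leaning further:

```python
# Invented toy feed: engagement rewards agreement with the user's leaning
# and the intensity of the content; exposure then shifts the leaning.
def engagement(pos: float, leaning: float) -> float:
    agreement = 1.0 - 0.5 * abs(pos - leaning)  # closer views engage more
    intensity = 1.0 + abs(pos)                  # stronger content engages more
    return agreement * intensity

items = [-1.0, -0.5, 0.0, 0.5, 1.0]  # crude positions on an opinion axis
leaning = 0.1                        # the user starts near the centre

for step in range(6):
    shown = max(items, key=lambda pos: engagement(pos, leaning))
    leaning = 0.7 * leaning + 0.3 * shown       # exposure reinforces the leaning
    print(f"step {step}: shown {shown:+.1f}, leaning {leaning:+.2f}")
# The feed only ever surfaces same-side content (+0.5 here), the leaning
# drifts away from the centre, and opposite-side items never rank first.
```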

In response to these developments, several initiatives have been put in place. Finland and Sweden score highest in media literacy, with 74 and 71 points respectively on the European Media Literacy Index 2023. The European Commission has adopted the 2022 Strengthened Code of Practice on Disinformation to improve platform transparency. In Canada, the Communications Security Establishment published its report Cyber Threats to Canada’s Democratic Process – 2023 Update, which analyses the use of generative AI in contexts of information interference.

“Traditional countermeasures – human moderation, fact-checking, media literacy – must evolve to adapt to the scale of the phenomenon,” observes Thierry Warin. “Technological solutions, such as synthetic content detectors and digital watermarks, are currently being developed.”
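The article does not say which watermarking schemes are meant. One widely discussed family for AI-generated text biases the generator toward a pseudorandom ‘green list’ of tokens that a verifier can later count: ordinary text hovers near the base rate, while heavily watermarked text sits well above it. The sketch below shows only the counting side, with an illustrative hash rule standing in for a real scheme’s secret key:

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandom green/red split seeded by the previous token
    (an illustrative stand-in for a real scheme's keyed hash)."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all pairs come out 'green'

def green_fraction(text: str) -> float:
    """Fraction of tokens falling on the green list, given their predecessor."""
    tokens = text.lower().split()
    pairs = list(zip(tokens, tokens[1:]))
    return sum(is_green(a, b) for a, b in pairs) / max(1, len(pairs))

# Ordinary text should land near 0.5; text from a watermarked generator that
# preferentially sampled green tokens would score significantly higher.
sample = "the committee reviewed the report and published its findings today"
print(f"green fraction: {green_fraction(sample):.2f}")
```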

Evolution of the information ecosystem

The year 2025 marks a significant change. The non-response rate of AI chatbots to sensitive questions has fallen to 0%, compared to 31% in 2024. On the other hand, their propensity to repeat false information has increased. Models now prioritise responsiveness, which can make them more vulnerable to unverified content online.

“The adage that ‘information is power’, attributed to Cardinal Richelieu, remains relevant,” concludes Professor Warin. “From printing to television, each media revolution has redistributed the power of information. With generative AI, we are witnessing a major transformation of this ecosystem.” The question now is how to adapt the mechanisms of verification and trust in information to this new technological reality. Ongoing initiatives, whether technological, regulatory or educational, aim to address this challenge.
