Generated using AI
Society · Digital

The addictive design of social media under fire from regulators 

Lê Nguyên Hoang
PhD in Mathematics (Polytechnique Montréal), Co-Founder and Executive Director of Tournesol.app
Key takeaways
  • On 6th February 2026, the European Commission took action against the Chinese social network TikTok over the company's failure to mitigate the risks of the app's addictive design, in violation of the Digital Services Act.
  • Around the same time, in California, a lawsuit accused Meta and YouTube of having "engineered addiction," notably through recommendation AIs, thereby exceeding the scope of mere content hosting.
  • Article 5.1.a of the AI Act, now in force in the EU, prohibits AI systems that "deploy subliminal techniques [...] with the objective or effect of materially distorting a person's behaviour."
  • The accusations levelled against these recommendation AIs are numerous: exploitation of psychological vulnerabilities, degradation of cognitive faculties, amplification of mass disinformation, and more.
  • Recommendation AIs are deployed within an attention economy worth hundreds of billions of dollars per year, funded by hyper-targeted advertising.

On 6th February 2026, the European Commission issued a preliminary opinion on the violation of the Digital Services Act by the Chinese social network TikTok1. The Commission considers that ByteDance, the company behind TikTok, has failed to “take reasonable, proportionate and effective measures to mitigate the risks arising from the addictive design” of its product. It refers to features such as infinite scrolling, autoplay and notifications, but also to TikTok's recommendation AI, i.e. the algorithm that responds to every click, swipe or scroll by a user looking for new content. This is the first time that a democratic institution has taken on an algorithm at the heart of controlling the flow of information in the modern world.
 
But this legal attack is not an isolated one. At the end of January 2026, the provisional version of the French draft bill aimed at protecting minors from the risks to which they are exposed through the use of social networks introduced an Article 1 bis that also specifically targets recommendation AI. “For the information thus highlighted, the provider may be held liable as a publisher,” states paragraph 6-8-1-II. At the same time, in California, a lawsuit filed on 9th February 2026 accused Meta and YouTube of having “engineered addiction”, particularly through recommendation AI, thus going far beyond the simple hosting of content governed by Section 230, a provision that is very lenient towards hosting providers2. In two weeks, recommendation AI has been the subject of more legal scrutiny than in its previous two decades of existence.
 
Yet for at least a decade, scientists, journalists and human rights advocates have been warning about the civilisational risk posed by these recommendation AI systems, sometimes called curation systems, or simply “the algorithm”, particularly by content creators and consumers on social media. The accusations range from exploiting users' psychological vulnerabilities and inciting suicide to cognitive impairment, complicity in cyberbullying, amplification of mass disinformation, destabilisation of democratic elections and knowing contribution to genocide. In this article, we will compile some of the most damning scientific arguments against these AIs.

The overwhelming power of recommendation AI

On 27th November 2025, Science published an article on the monumental impact of content recommendations on users3. The study invited a group of X/Twitter users to install a browser extension, which then assigned them to different groups. For some, no intervention took place. For others, the extension reordered the messages proposed by X/Twitter's recommendation AI, putting those that were most angry or hateful towards the opposing political group first. For yet others, the extension prioritised the most calm or politically benevolent content. Strikingly, after only one week, this minimal intervention had already affected users' emotional polarisation, i.e. their degree of emotional antipathy towards the opposing camp, to an extent that the authors estimate to be “comparable to three years of attitude change in the United States”.

Rather than imposing sanctions on TikTok, YouTube and Meta, it might be wiser to invest in the development of sovereign alternatives that comply with democratic standards.

In a very insidious way, even though users knew they were participating in an experiment (they had to install an extension!), they were unable to guess the nature of the intervention they were the target of. The reordering of published messages was, in a sense, subliminal, or at least below the threshold of consciousness. This echoes Article 5.1.a of the AI Act, now in force in the European Union, which prohibits AI systems that “deploy subliminal techniques [...] with the objective or effect of materially distorting a person's behaviour”. Of course, other conditions are mentioned in this article, and the Science study is certainly not enough to conclude that recommendation AI should be banned. Nevertheless, it raises dizzying questions about the investigations that should be prioritised to protect the 3 billion users of these systems, and about the application of existing regulations.
 
Moreover, the Science study is not an isolated case. Eleven years earlier, on 2nd June 2014, a study published in PNAS had already shown this monumental impact of recommendation AI4. The intervention was also minimal: it involved removing positive or negative emotional content from the recommendation feed, with a probability of between 10% and 90%. Amazingly, after just one week, simply removing a fraction of one type of emotional content changed users' behaviour in line with that removal. In other words, the less negative content users saw, the less negative the content they posted. Equally striking was the fact that reducing emotional content, whether positive or negative, reduced user activity by approximately 5 standard deviations.
 
There is little doubt that all social networks routinely observe this last result, given that user engagement is directly linked to the platforms' ability to show users highly targeted advertisements. We are talking here about a market worth hundreds of billions of dollars a year, ten times larger than that of chatbots today. Despite questionable circular financing, OpenAI is forecasting “only” $12 billion in revenue in 2025, a drop in the ocean compared to Meta's $201 billion. Given the amounts at stake, it is not surprising to see recommendation AI being over-optimised to capture users' attention. We can therefore expect these systems to massively recommend extremely emotionally charged content, both positive and negative, in order to make social media as addictive as possible.

Psychological distress, humanitarian disasters and democratic decline

While the addictive design of recommendation AI is now the subject of legal proceedings by the European Commission and in California, incriminating evidence is mounting regarding its catastrophic role in user well-being, the safety of ethnic populations and the integrity of democracies. Examples include a British survey revealing that nearly half of young people say they would have preferred to grow up without the Internet5, the accusation by the UN and Amnesty International that Facebook knowingly contributed to the genocide of the Rohingya by massively amplifying calls for murder6, and the cancellation of the 2024 presidential election in Romania following information interference on TikTok.


Beyond the striking correlation between the widespread adoption of social media around 2012 and worrying trends in indicators such as anxiety and depression rates among young people7, the number of deaths in wars8, and the liberal democracy index9, there are many worrying arguments regarding the various negative impacts of social media: the testimonies of victims and witnesses, epidemiological studies10, randomised clinical trials11, natural experiments12 and leaks of internal documents.
 
In particular, as in previous merchants-of-doubt cases (tobacco, oil, sugar, pesticides, PFAS, etc.), these internal documents reveal that the manufacturers themselves concluded that their products were harmful, sometimes decades before researchers came to the same conclusion. The Facebook Files, exposed by whistleblower Frances Haugen in 2021, are extremely damning regarding the catastrophic implications of recommendation AI for the mental health of young girls, the quality of journalism and political discourse, pushing the latter in particular towards systematically divisive tones.
 
However, unlike other industries, the digital giants have the advantage of enormous influence over academic research13 and a monopoly on compromising data, which they actively seek to protect, even within a legal framework. In this context, any scientist who claims “it has not been proven” without specifying that the lack of evidence is fuelled by the deliberate opacity of the accused seems to me to be offering a problematic, perhaps even irresponsible, interpretation of our current understanding of the complex relationship between science and industry.
 
On the contrary, given the mountain of incriminating evidence against the digital giants and their recommendation AI, and knowing, moreover, their disturbing proximity to Chinese authoritarianism or to a US power that President Macron has described as neo-colonial, it seems wise to invest heavily in the design of sovereign alternatives that comply with democratic standards, as proposed by projects such as EU OS, Framasoft, Mastodon, EuroSky and Tournesol, a non-profit project specialising in the design of recommendation AI for the public good. It also seems wise to adopt a radically more accusatory tone, as the European Commission has done with TikTok.

1. https://digital-strategy.ec.europa.eu/en/news/commission-preliminarily-finds-tiktoks-addictive-design-breach-digital-services-act
2. https://fr.wikipedia.org/wiki/Communications_Decency_Act#Section_230
3. https://www.science.org/doi/10.1126/science.adu5584
4. https://www.pnas.org/doi/10.1073/pnas.1320040111
5. https://www.theguardian.com/technology/2025/may/20/almost-half-of-young-people-would-prefer-a-world-without-internet-uk-study-finds
6. https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/
7. https://jonathanhaidt.com/social-media/
8. https://ourworldindata.org/explorers/countries-in-conflict-data?tab=line&country=~OWID_WRL&Measure=Conflict+deaths&Conflict+type=All+armed+conflicts&Conflict+sub-type=Across+all+sub-types
9. https://v-dem.net/documents/61/v-dem-dr__2025_lowres_v2.pdf
10. https://pubmed.ncbi.nlm.nih.gov/31509167/
11. https://www.sciencedirect.com/science/article/pii/S2666560325000714
12. https://pubmed.ncbi.nlm.nih.gov/35833231/
13. https://dl.acm.org/doi/10.1145/3461702.3462563
