The addictive design of social media under fire from regulators
- On 6th February 2026, the European Commission issued preliminary findings against the Chinese social network TikTok over the company's failure to mitigate the risks arising from the app's addictive design, in violation of the Digital Services Act.
- Around the same time, in California, a lawsuit accused Meta and YouTube of having "engineered addiction," notably through recommendation AIs, thereby exceeding the scope of mere content hosting.
- Article 5(1)(a) of the AI Act, now in force in the EU, prohibits AI systems that "deploy subliminal techniques [...] with the objective or effect of materially distorting a person's behaviour."
- The accusations levelled against these recommendation AIs are numerous: exploitation of psychological vulnerabilities, degradation of cognitive faculties, amplification of mass disinformation, and more.
- Recommendation AIs are deployed within an attention economy worth hundreds of billions of dollars per year, funded by hyper-targeted advertising.
On 6th February 2026, the European Commission issued preliminary findings on the violation of the Digital Services Act by the Chinese social network TikTok[1]. The Commission considers that ByteDance, the company behind TikTok, has failed to "take reasonable, proportionate and effective measures to mitigate the risks arising from the addictive design" of its product. It points to features such as infinite scrolling, autoplay and notifications, but also to its recommendation AI, i.e. the algorithm that responds to every click, swipe or scroll by a user looking for new content. This is the first time a democratic institution has taken on an algorithm at the heart of the modern world's information flows.
But this legal attack is not an isolated one. At the end of January 2026, the provisional version of a draft bill aimed at protecting minors from the risks they are exposed to on social networks introduced an Article 1 bis that also specifically targets recommendation AI. "For the information thus highlighted, the provider may be held liable as a publisher," states the proposed paragraph 6-8-1-II. Meanwhile, in California on 9th February 2026, a lawsuit accused Meta and YouTube of having "engineered addiction", notably through recommendation AI, thus going far beyond the simple hosting of content covered by Section 230, a provision that is very lenient towards hosting providers[2]. In two weeks, recommendation AI has been the subject of more legal scrutiny than in its previous two decades of existence.
Yet for at least a decade, scientists, journalists and human rights advocates have been warning about the civilisational risk posed by these recommendation AI systems, sometimes called curation systems, or simply "the algorithm" by content creators and consumers on social media. The accusations range from the exploitation of users' psychological vulnerabilities to incitement to suicide, cognitive impairment, complicity in cyberbullying, amplification of mass disinformation, destabilisation of democratic elections and knowingly contributing to genocide. In this article, we compile some of the most damning scientific arguments against these AIs.
The overwhelming power of recommendation AI
On 27th November 2025, Science published an article on the monumental impact of content recommendations on users[3]. The study invited a group of X/Twitter users to install a browser extension, which assigned them to different conditions. For some, no intervention took place. For others, the extension reordered the messages proposed by X/Twitter's recommendation AI, putting those that were most angry or hateful towards the opposing political group first. For a third group, the extension prioritised the calmest or most politically benevolent content. Strikingly, after only one week, this minimal intervention had already shifted users' affective polarisation, i.e. their degree of emotional antipathy towards the opposing camp, to an extent the authors estimate to be "comparable to three years of attitude change in the United States".
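To make the intervention concrete, here is a minimal sketch of the kind of client-side reordering the extension performed. It is not the study's actual code: the keyword-based hostility scorer and the Post fields are illustrative assumptions standing in for whatever classifier the authors actually used.

```python
# Minimal sketch of the study's client-side reordering, under assumptions:
# the hostility scorer is a toy keyword count, and the Post fields are
# hypothetical; the real study rated anti-outgroup animosity with its own model.

from dataclasses import dataclass

# Toy lexicon, purely illustrative.
HOSTILE_WORDS = {"hate", "traitor", "destroy", "enemy"}

@dataclass
class Post:
    post_id: str
    text: str

def hostility_score(post: Post) -> int:
    """Count hostile keywords as a crude proxy for anti-outgroup animosity."""
    return sum(word in HOSTILE_WORDS for word in post.text.lower().split())

def rerank(feed: list[Post], condition: str) -> list[Post]:
    """Reorder the platform's feed without adding or removing anything.

    "hostile":    most hostile posts first.
    "benevolent": calmest posts first.
    "control":    feed left untouched.
    """
    if condition == "control":
        return feed
    return sorted(feed, key=hostility_score, reverse=(condition == "hostile"))
```

The sketch makes one point explicit: the set of posts never changes, only their order, which is why participants could not detect the manipulation.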

Insidiously, even though users knew they were participating in an experiment (they had to install an extension!), they were unable to guess the nature of the intervention they were subjected to. The reordering of messages was, in a sense, subliminal, or at least below their threshold of consciousness. This echoes Article 5(1)(a) of the AI Act, now in force in the European Union, which prohibits AI systems that "deploy subliminal techniques [...] with the objective or effect of materially distorting a person's behaviour". Of course, the paragraph sets out further conditions, and the Science study is certainly not enough to conclude that recommendation AI should be banned. Nevertheless, it raises dizzying questions about the investigations that should be prioritised to protect these systems' 3 billion users, and about the enforcement of existing regulations.
Moreover, the Science study is not an isolated case. Eleven years earlier, on 2nd June 2014, a study published in PNAS had already shown this monumental impact of AI[4]. The intervention was also minimal: positive or negative emotional content was removed from the recommendation feed with a probability of between 10% and 90%. Astonishingly, after just one week, simply removing a fraction of one type of emotional content had changed users' behaviour in line with that removal. In other words, the less negative content users saw, the less negative the content they posted. Equally striking, reducing emotional content, whether positive or negative, reduced user activity by approximately 5 standard deviations.
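For concreteness, here is a minimal sketch of that intervention. The Post structure and its emotion label are assumptions for illustration; the original study classified posts as positive or negative using the LIWC word lists.

```python
# Minimal sketch of the 2014 intervention, under assumptions: the `emotion`
# label and field names are hypothetical (the original study classified posts
# with the LIWC word lists).

import random
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    emotion: str  # "positive", "negative" or "neutral"

def filter_feed(feed: list[Post], target: str, p_remove: float) -> list[Post]:
    """Drop each post with the targeted emotional valence with probability
    p_remove; the study varied this probability between 10% and 90%."""
    assert 0.10 <= p_remove <= 0.90
    return [post for post in feed
            if post.emotion != target or random.random() >= p_remove]
```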
There is no doubt that this last result has been observed again and again by every social network, given that user engagement directly determines the platforms' ability to show highly targeted advertisements. This is a market worth hundreds of billions of dollars a year, ten times larger than today's chatbot market. Despite questionable circular financing arrangements, OpenAI forecasts "only" $12 billion in revenue for 2025, a drop in the ocean compared to Meta's $201 billion. Given the amounts at stake, it is not surprising to see recommendation AI over-optimised to capture users. We can therefore expect it to massively recommend extremely emotionally charged content, both positive and negative, in order to make social media as addictive as possible.
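To illustrate the incentive at work, here is a minimal sketch of an engagement-maximising ranking objective. No platform publishes its real formula; the signals and weights below are purely hypothetical.

```python
# Hypothetical engagement-maximising ranking; the signals and weights are
# illustrative assumptions, not any platform's actual formula.

# Predicted interaction probabilities and their (made-up) weights.
WEIGHTS = {
    "p_click": 1.0,    # probability the user opens the post
    "p_like": 0.5,
    "p_comment": 2.0,  # comments often signal strong (including angry) reactions
    "p_share": 3.0,    # shares spread the post to new users
}

def engagement_score(predictions: dict[str, float]) -> float:
    """Collapse predicted interactions into a single engagement score."""
    return sum(WEIGHTS[s] * predictions.get(s, 0.0) for s in WEIGHTS)

def rank_feed(candidates: list[dict[str, float]]) -> list[dict[str, float]]:
    """Order candidate posts by predicted engagement, highest first.

    Emotionally charged posts tend to raise all interaction predictions at
    once, so this objective surfaces them regardless of user well-being.
    """
    return sorted(candidates, key=engagement_score, reverse=True)
```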
Psychological distress, humanitarian disasters and democratic decline
While the addictive design of recommendation AI is now the subject of legal proceedings by the European Commission and in California, incriminating evidence is mounting regarding its catastrophic role in user well-being, the safety of ethnic minorities and the integrity of democracies. Examples include a British survey revealing that nearly half of young people say they would have preferred to grow up without the Internet[5]; the accusation by the UN and Amnesty International that Facebook knowingly contributed to the genocide of the Rohingya by massively amplifying calls for murder[6]; and the annulment of the 2024 presidential election in Romania following information interference on TikTok.
Beyond the striking correlation between the widespread adoption of social media around 2012 and worrying trends in indicators such as anxiety and depression rates among young people[7], the number of deaths in wars[8] and the liberal democracy index[9], numerous lines of evidence point to the various negative impacts of social media: the testimonies of victims and witnesses, epidemiological studies[10], randomised clinical trials[11], natural experiments[12] and leaks of internal documents.
In particular, as in previous merchants-of-doubt cases (tobacco, oil, sugar, pesticides, PFAS, etc.), these internal documents reveal that the manufacturers themselves concluded that their products were harmful, sometimes decades before researchers reached the same conclusion. The Facebook Files, exposed by whistleblower Frances Haugen in 2021, are extremely damning in this respect: they document the catastrophic implications of recommendation AI for the mental health of teenage girls and for the quality of journalism and political discourse, pushing the latter in particular towards systematically divisive tones.
However, unlike other industries, the digital giants enjoy enormous influence over academic research[13] and a monopoly on compromising data, which they actively seek to protect, even within a legal framework. In this context, any scientist who claims that "it has not been proven", without specifying that the lack of evidence is fuelled by the deliberate opacity of the accused, seems to me to be offering a problematic, perhaps even irresponsible, reading of the complex relationship between science and industry.
On the contrary, given the mountain of incriminating evidence against the digital giants and their recommendation AI, and given, moreover, their disturbing proximity to Chinese authoritarianism or to an American power that President Macron has described as neo-colonial, it seems wise to invest heavily in the design of sovereign alternatives that comply with democratic standards, as proposed by projects such as EU OS, Framasoft, Mastodon, EuroSky and Tournesol, a non-profit specialising in the design of recommendation AI for the public good. It also seems wise to adopt a radically more accusatory tone, as the European Commission has done with TikTok.