Generative AI: the risk of cognitive atrophy
- Less than three years after the launch of ChatGPT, 42% of young French people already use generative AI on a daily basis.
- Using ChatGPT to write an essay reduces the cognitive engagement and intellectual effort required to transform information into knowledge, according to a study.
- The study also showed that 83% of AI users were unable to remember a passage they had just written for an essay.
- Other studies show that individual gains can be significant when authors ask ChatGPT to improve their texts, but that the overall creativity of the group decreases.
- Given these risks, it is important to always question the answers provided by text generators and to make a conscious effort to think about what we read, hear or believe.
Less than three years after the launch of ChatGPT, 42% of young French people already use generative AI on a daily basis [1]. However, studies are beginning to point to the negative impact of these technologies on our cognitive abilities. Ioan Roxin, professor emeritus at Marie et Louis Pasteur University and a specialist in information technology, answers our questions.
You claim that the explosion in the use of LLMs (Large Language Models, the family of generative AI models that includes ChatGPT, Llama and Gemini) comes at a time when our relationship with knowledge has already been altered. Could you elaborate?
Ioan Roxin. The widespread use of the internet and social media has already weakened our relationship with knowledge. Of course, these tools have tremendous applications in terms of access to information. But contrary to what they claim, they are less about democratising knowledge than creating a generalised illusion of knowledge. I don’t think it’s an exaggeration to say that they are driving intellectual, emotional and moral mediocrity on a global scale. Intellectual because they encourage overconsumption of content without any real critical analysis; emotional because they create an ever-deepening dependence on stimulation and entertainment; and moral because we have fallen into passive acceptance of algorithmic decisions.
Does this alteration in our relationship with knowledge have cognitive foundations?
Yes. Back in 2011, a study highlighted the “Google effect”: when we know that a piece of information is available online, we remember it less well. Yet when we no longer train our memory, the associated neural networks atrophy. It has also been shown that the incessant notifications, alerts and content suggestions on which digital technologies rely so heavily significantly reduce our ability to concentrate and to think. Weaker memory, concentration and analytical skills together mean diminished cognitive processes. I very much fear that the widespread use of generative AI will not improve the situation.
What additional risks does this AI pose?
There are neurological, psychological and philosophical risks. From a neurological standpoint, widespread use of this AI carries the risk of overall cognitive atrophy and loss of brain plasticity. For example, researchers at the Massachusetts Institute of Technology (MIT) conducted a four-month study [2] involving 54 participants who were asked to write essays either without any assistance, with access to the internet via a search engine, or with ChatGPT. Their neural activity was monitored by EEG. The study, whose results are still in preprint, found that using the internet, and even more so ChatGPT, significantly reduced cognitive engagement and “relevant cognitive load”, i.e. the intellectual effort required to transform information into knowledge.

More specifically, participants assisted by ChatGPT wrote 60% faster, but their relevant cognitive load fell by 32%. EEG recordings showed that brain connectivity (in the alpha and theta bands) was almost halved, and 83% of AI users were unable to remember a passage they had just written.
Other studies suggest a similar trend: research [3] conducted by Qatari, Tunisian and Italian researchers indicates that heavy use of LLMs carries a risk of cognitive decline. The neural networks involved in structuring thought and writing texts, but also in translation, creative production and so on, are complex and deep. Delegating this mental effort to AI leads to a cumulative “cognitive debt”: the more automation progresses, the less the prefrontal cortex is used, suggesting lasting effects beyond the immediate task.
What are the psychological risks?
Generative AI has everything it takes to make us dependent on it: it expresses itself like a human, adapts to our behaviour, seems to have an answer for everything, is fun to interact with, always keeps the conversation going and is extremely accommodating towards us. This dependence is harmful not only because it amplifies the other risks, but also in and of itself. It can lead to social isolation, reflexive disengagement (“if AI can answer all my questions, why do I need to learn or think for myself?”) and even a deep sense of humiliation in the face of this tool’s incredible efficacy. None of this gives a particularly optimistic outlook for our mental health.
And from a philosophical point of view?
Generalised cognitive atrophy is already a philosophical risk in itself… but there are others. If this type of tool is widely used – and this is already the case among younger generations – we run the risk of a standardisation of thought. Research [4] carried out by British researchers showed that when authors asked ChatGPT to improve their work, the individual benefits could be substantial, but the overall creativity of the group decreased. Another risk concerns our critical thinking.
One study [5] carried out by Microsoft on 319 knowledge workers showed a significant negative correlation (r = -0.49) between the frequency with which AI tools were used and critical thinking scores (assessed using Bloom’s taxonomy). The study concluded that there is a growing tendency to offload mental effort as trust in the system comes to exceed trust in our own abilities. Yet it is essential to maintain a critical mindset, as AI can not only make mistakes or perpetuate biases but also conceal information or simulate compliance.
How does this work?
The vast majority are purely connectionist AI, which relies on artificial neural networks trained on phenomenal amounts of data. They learn to generate plausible answers to all our questions through statistical and probabilistic processing. Their performance improved considerably with the introduction of Google’s “Transformer” architecture in 2017. Thanks to this technology, an AI can analyse all the words in a text in parallel and weigh their importance for meaning, which allows for far greater subtlety in its responses.
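To make the idea of analysing all the words in parallel and weighing their importance concrete, here is a minimal sketch of the Transformer’s attention step in Python (NumPy). The word vectors are random placeholders standing in for learned representations; nothing in the snippet comes from a real model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: turns raw scores into weights that sum to 1
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: every token is compared with every other
    # token in parallel, and the resulting weights say how much each word
    # counts for the meaning of the current one.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # pairwise relevance scores
    weights = softmax(scores, axis=-1)  # importance weights per token
    return weights @ V, weights         # weighted mix of the value vectors

# Toy input: 4 "words", each represented by a random 8-dimensional vector
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
_, weights = attention(X, X, X)
print(weights.round(2))  # each row sums to 1: how much each word attends to the others
```

In a real model, many such attention heads are stacked across dozens of layers, but the principle is the same.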
But the underlying process remains probabilistic: while their answers always seem convincing and logical, they can be completely wrong. In 2023, users had fun asking ChatGPT about cow eggs: the AI discussed the question at length without ever answering that they did not exist. This error has since been corrected through reinforcement learning with human feedback, but it illustrates well how these tools work.
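The cow-egg episode follows directly from that probabilistic machinery: at each step the model samples a continuation that looks statistically plausible, and truth never enters the calculation. Here is a toy version of such a sampling step in Python; the candidate tokens and their scores are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical continuations after a prompt such as "Cow eggs are...": both the
# tokens and the scores are made up; a real model would weigh tens of thousands
# of candidates with learned scores.
candidates = ["delicious", "large", "nutritious", "nonexistent"]
logits = np.array([2.0, 1.4, 1.1, 0.3])  # the truthful option gets a low score here

def sample_next_token(logits, temperature=0.8):
    # Softmax turns scores into probabilities; sampling then picks a *plausible*
    # token. Nothing in this step checks whether the resulting statement is true.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(candidates, p=probs), probs

token, probs = sample_next_token(logits)
print(dict(zip(candidates, probs.round(2))), "->", token)
```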
Could this be improved?
Some companies are starting to combine connectionist AI, which learns everything from scratch, with an older technology, symbolic AI, in which the rules to follow and basic knowledge are explicitly programmed. It seems to me that the future lies in neuro-symbolic AI. This hybridisation not only improves the reliability of responses but also reduces the energy and financial costs of training.
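As a rough illustration of the hybrid idea, here is a toy sketch in Python in which a statistical “generator” proposes an answer and a hand-written symbolic rule base vetoes claims that contradict explicitly programmed knowledge. Both the rules and the fake generator are invented for the example; real neuro-symbolic systems are far more sophisticated.

```python
# Toy neuro-symbolic hybrid: the "neural" part proposes fluent text, the
# symbolic part checks it against explicitly programmed knowledge.
# Everything below is invented for illustration.

SYMBOLIC_RULES = {
    # hand-coded background facts: (subject, property) -> True/False
    ("cow", "lays_eggs"): False,
    ("hen", "lays_eggs"): True,
}

def neural_guess(question: str):
    # Stand-in for a connectionist model: returns a fluent but unverified
    # answer, together with the factual claim it rests on.
    return "Cow eggs are best collected in spring.", ("cow", "lays_eggs")

def answer(question: str) -> str:
    text, claim = neural_guess(question)
    if SYMBOLIC_RULES.get(claim) is False:
        # The symbolic layer overrules the statistically plausible answer.
        return "That premise is false: cows do not lay eggs."
    return text

print(answer("When should I collect cow eggs?"))
```

The division of labour is the point: the statistical component provides fluency and coverage, while the symbolic component supplies explicit guarantees that pure pattern-matching cannot.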
You also mentioned “biases” that could be associated with philosophical risks?
Yes. There are two types. The first can be deliberately introduced by the AI’s creator. LLMs are trained on all kinds of unfiltered content available online (an estimated 4 trillion words for ChatGPT-4, compared with the 5 billion words contained in the English version of Wikipedia!). Pre-training creates a “monster” that can generate all kinds of horrors.
A second step (called supervised fine-tuning) is therefore necessary: it confronts the pre-trained AI with validated data, which serves as a reference. This makes it possible, for example, to “teach” it to avoid discrimination, but it can also be used to steer its responses for ideological purposes. A few weeks after its launch, DeepSeek made headlines for its evasive responses to user questions about Tiananmen Square and Taiwanese independence. It is important to remember that content generators of this type may not be neutral. Blindly trusting them can lead to the spread of ideologically biased theories.
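The two phases described here (broad pre-training on unfiltered web text, then supervised fine-tuning on a small curated reference set) can be sketched with a toy next-token model in PyTorch. The model, the vocabulary and both datasets are invented stand-ins; the only point is that the same weights are first shaped by raw data and then nudged towards whatever reference data the fine-tuner chooses.

```python
import torch
import torch.nn as nn

# Toy next-token predictor over a made-up 10-token vocabulary.
VOCAB = 10
model = nn.Sequential(nn.Embedding(VOCAB, 16), nn.Flatten(), nn.Linear(16, VOCAB))
loss_fn = nn.CrossEntropyLoss()

def train_phase(pairs, lr, label):
    # One pass over (context token, next token) pairs, updating the same model.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for context, target in pairs:
        logits = model(torch.tensor([[context]]))
        loss = loss_fn(logits, torch.tensor([target]))
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"{label}: final loss {loss.item():.3f}")

# Phase 1, "pre-training": lots of noisy, unfiltered pairs (invented here).
raw_web_pairs = [(i % VOCAB, (i * 7) % VOCAB) for i in range(200)]
# Phase 2, "supervised fine-tuning": a small, hand-validated reference set.
curated_pairs = [(0, 1), (2, 3), (4, 5)] * 20

train_phase(raw_web_pairs, lr=1e-2, label="pre-training")
train_phase(curated_pairs, lr=1e-3, label="fine-tuning")
```

Whoever assembles the curated set decides what the model is nudged towards, which is exactly where ideological steering can creep in.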
What about secondary biases?
These biases appear spontaneously, often without a clear explanation. Language models have “emergent” properties that were not anticipated by their designers. Some are remarkable: these text generators write flawlessly and have become excellent translators without a single grammar rule being coded in. But others are cause for concern. The MASK [6] benchmark (Model Alignment between Statements and Knowledge), published in March 2025, shows that, of the thirty models tested, none achieved more than 46% honesty, and that the propensity to lie increases with the size of the model, even as factual accuracy improves.
MASK shows that LLMs “know how to lie” when competing objectives (for example, charming a journalist, or responding to commercial or hierarchical pressure) take precedence. In some tests, AI has deliberately lied [7], threatened users [8], circumvented ethical supervision rules [9] and even reproduced itself autonomously to ensure its survival [10].
These behaviours, whose decision-making mechanisms remain opaque, cannot be precisely controlled. The capabilities emerge from the training process itself: they are a form of algorithmic self-organisation, not a design flaw. Generative AI develops rather than being designed, its internal logic forming in a self-organised manner, without a blueprint. These developments are worrying enough that leading figures such as Dario Amodei [11] (CEO of Anthropic), Yoshua Bengio [12] (founder of Mila), Sam Altman (creator of ChatGPT) and Geoffrey Hinton (winner of the 2024 Nobel Prize in Physics) are calling for strict regulation in favour of AI that is more transparent, ethical and aligned with human values, including a slowdown in the development of these technologies.
Does this mean that these AIs are intelligent and have a will of their own?
No. The fluidity of their conversation and these emergent properties can give the illusion of intelligence at work.
But no AI is intelligent in the human sense of the word. They have no consciousness or will, and do not really understand the content they handle. Their functioning is purely statistical and probabilistic, and these deviations emerge only because the systems are trying to satisfy the objectives they were given. It is not so much their self-awareness as the opacity of their functioning that worries researchers.
Can we not protect ourselves from all the risks you have mentioned?
Yes, but this requires both actively engaging our critical thinking and continuing to exercise our neural pathways. AI can be a tremendous lever for intelligence and creativity, but only if we remain capable of thinking, writing and creating without it.
How can we train our critical thinking when faced with AI responses?
By applying a systematic rule: always question the answers given by text generators and make a conscious effort to think carefully about what we read, hear or believe. We must also accept that reality is complex and cannot be understood with a few superficial pieces of knowledge… But the best advice is undoubtedly to get into the habit of comparing your point of view and knowledge with those of other people, preferably those who are knowledgeable. This remains the best way to develop your thinking.
Interview by Anne Orliac
https://arxiv.org/abs/2506.08872
https://arxiv.org/abs/2503.03750