Home / Chroniques / Generative AI: the risk of cognitive atrophy
A person observes a glowing brain display on a screen
Généré par l'IA / Generated using AI
π Neuroscience π Digital

Generative AI: the risk of cognitive atrophy

ioan_roxin – copie
Ioan Roxin
Professor Emeritus at Marie et Louis Pasteur University
Key takeaways
  • Less than three years after the launch of ChatGPT, 42% of young French people already use generative AI on a daily basis.
  • Using ChatGPT to write an essay reduces the cognitive engagement and intellectual effort required to transform information into knowledge, according to a study.
  • The study also showed that 83% of AI users were unable to remember a passage they had just written for an essay.
  • Other studies show that individual gains can be significant when authors ask ChatGPT to improve their texts, but that the overall creativity of the group decreases.
  • Given these risks, it is important to always question the answers provided by text generators and to make a conscious effort to think about what we read, hear or believe.

Less than three years after the launch of Chat­G­PT, 42% of young French people already use gen­er­at­ive AI on a daily basis1. How­ever, stud­ies are begin­ning to point to the neg­at­ive impact of these tech­no­lo­gies on our cog­nit­ive abil­it­ies. Ioan Rox­in, pro­fess­or emer­it­us at Mar­ie et Louis Pas­teur Uni­ver­sity and spe­cial­ist in inform­a­tion tech­no­logy, answers our questions.

You claim that the explosion in the use of LLM (Large Language Models, generative AI models including ChatGPT, Llama and Gemini) comes at a time when our relationship with knowledge has already been altered. Could you elaborate?

Ioan Rox­in. The wide­spread use of the inter­net and social media has already weakened our rela­tion­ship with know­ledge. Of course, these tools have tre­mend­ous applic­a­tions in terms of access to inform­a­tion. But con­trary to what they claim, they are less about demo­crat­ising know­ledge than cre­at­ing a gen­er­al­ised illu­sion of know­ledge. I don’t think it’s an exag­ger­a­tion to say that they are driv­ing intel­lec­tu­al, emo­tion­al and mor­al mediocrity on a glob­al scale. Intel­lec­tu­al because they encour­age over­con­sump­tion of con­tent without any real crit­ic­al ana­lys­is; emo­tion­al because they cre­ate an ever-deep­en­ing depend­ence on stim­u­la­tion and enter­tain­ment; and mor­al because we have fallen into pass­ive accept­ance of algorithmic decisions.

Does this alteration in our relationship with knowledge have cognitive foundations?

Yes. Back in 2011, a study high­lighted the “Google effect”: when we know that inform­a­tion is avail­able online, we do not remem­ber it as well. How­ever, when we no longer train our memory, the asso­ci­ated neur­al net­works atrophy. It has also been proven that the incess­ant noti­fic­a­tions, alerts and con­tent sug­ges­tions on which digit­al tech­no­lo­gies rely heav­ily sig­ni­fic­antly reduce our abil­ity to con­cen­trate and think. Reduced memory, con­cen­tra­tion and ana­lyt­ic­al skills lead to dimin­ished cog­nit­ive pro­cesses. I very much fear that the wide­spread use of gen­er­at­ive AI will not improve the situation.

What additional risks does this AI pose?

There are neur­o­lo­gic­al, psy­cho­lo­gic­al and philo­soph­ic­al risks. From a neur­o­lo­gic­al stand­point, wide­spread use of this AI car­ries the risk of over­all cog­nit­ive atrophy and loss of brain plas­ti­city. For example, research­ers at the Mas­sachu­setts Insti­tute of Tech­no­logy (MIT) con­duc­ted a four-month study2 involving 54 par­ti­cipants who were asked to write essays without assist­ance, with access to the inter­net via a search engine or with Chat­G­PT. Their neur­al activ­ity was mon­itored by EEG. The study, the res­ults of which are still in pre­print, found that using the inter­net, and even more so Chat­G­PT, sig­ni­fic­antly reduced cog­nit­ive engage­ment and “rel­ev­ant cog­nit­ive load”, i.e. the intel­lec­tu­al effort required to trans­form inform­a­tion into knowledge. 

More spe­cific­ally, par­ti­cipants assisted by Chat­G­PT wrote 60% faster, but their rel­ev­ant cog­nit­ive load fell by 32%. EEG showed that brain con­nectiv­ity was almost halved (alpha and theta waves) and 83% of AI users were unable to remem­ber a pas­sage they had just written. 

Oth­er stud­ies sug­gest a sim­il­ar trend: research3 con­duc­ted by Qatari, Tunisi­an and Itali­an research­ers indic­ates that heavy use of LLM car­ries the risk of cog­nit­ive decline. The neur­al net­works involved in struc­tur­ing thought, writ­ing texts, but also in trans­la­tion, cre­at­ive pro­duc­tion, etc. are com­plex and deep. Del­eg­at­ing men­tal effort to AI leads to a cumu­lat­ive “cog­nit­ive debt”: the more auto­ma­tion pro­gresses, the less the pre­front­al cor­tex is used, sug­gest­ing last­ing effects bey­ond the imme­di­ate task.

What are the psychological risks?

Gen­er­at­ive AI have everything it takes to make us depend­ant on it: it expresses itself like humans, adapts to our beha­viour, seems to have all the answers, is fun to inter­act with, always keeps the con­ver­sa­tion going and is extremely accom­mod­at­ing towards us. How­ever, this depend­ence is harm­ful not only because it increases oth­er risks but in and of itself. It can lead to social isol­a­tion, reflex­ive dis­en­gage­ment (“if AI can answer all my ques­tions, why do I need to learn or think for myself?”) and even a deep sense of humi­li­ation when faced with this tool’s incred­ible effic­acy. None of this gives a par­tic­u­larly optim­ist­ic out­look for our men­tal health.

And from a philosophical point of view?

Gen­er­al­ised cog­nit­ive atrophy is already a philo­soph­ic­al risk in itself… but there are oth­ers. If this type of tool is widely used – and this is already the case with young­er gen­er­a­tions – we are at risk of a stand­ard­isa­tion of thought. Research4 car­ried out by Brit­ish research­ers showed that when authors asked Chat­G­PT to improve their work, the indi­vidu­al bene­fits could be great, but the over­all cre­ativ­ity of the group reduced. Anoth­er risk relates to our crit­ic­al thinking. 

One study5 car­ried out by Microsoft on 319 know­ledge work­ers showed a sig­ni­fic­ant neg­at­ive cor­rel­a­tion (r=-0.49) between the fre­quency with which AI tools were used and crit­ic­al think­ing scores (Bloom’s tax­onomy). The study con­cluded that is an increased tend­ency to off­load men­tal effort as trust in the sys­tem exceeds trust in our own abil­it­ies. How­ever, it is essen­tial to main­tain a crit­ic­al mind­set as AI can not only make mis­takes or per­petu­ate biases but also con­ceal inform­a­tion or sim­u­late compliance.

How does this work?

A vast major­ity are solely con­nec­tion­ist AI, which rely on arti­fi­cial neur­al net­works trained using phe­nom­en­al amounts of data. They learn to gen­er­ate plaus­ible answers to all our ques­tions through stat­ist­ic­al and prob­ab­il­ist­ic pro­cessing. Their per­form­ance has improved con­sid­er­ably with the intro­duc­tion of Google’s “Trans­former” tech­no­logy in 2017. Thanks to this tech­no­logy, AI can ana­lyse all the words in a text in par­al­lel and weigh their import­ance for mean­ing, which allows for great­er sub­tlety in responses. 

But the back­ground remains prob­ab­il­ist­ic: while their answers always seem con­vin­cing and logic­al, they can be com­pletely wrong. In 2023, users had fun ask­ing Chat­G­PT about cow eggs: the AI dis­cussed the ques­tion at length without ever answer­ing that they did not exist. This error has since been cor­rec­ted through rein­force­ment learn­ing with human feed­back, but it illus­trates well how these tools work.

Could this be improved?

Some com­pan­ies are start­ing to com­bine con­nec­tion­ist AI, which learns everything from scratch, with older tech­no­logy, sym­bol­ic AI, in which rules to fol­low and basic know­ledge are expli­citly pro­grammed. It seems to me that the future lies in neuro-sym­bol­ic AI. This hybrid­isa­tion not only improves the reli­ab­il­ity of responses but also reduces the energy and fin­an­cial costs of training.

You also mentioned “biases” that could be associated with philosophical risks?

Yes. There are two types. The first can be delib­er­ately intro­duced by the AI cre­at­or. LLMs are trained on all kinds of unfiltered con­tent avail­able online (an estim­ated 4 tril­lion words for ChatGPT4, com­pared to the 5 bil­lion words con­tained in the Eng­lish ver­sion of Wiki­pe­dia!). Pre-train­ing cre­ates a “mon­ster” that can gen­er­ate all kinds of horrors.

A second step (called super­vised fine-tun­ing) is there­fore neces­sary: it con­fronts the pre-trained AI with val­id­ated data, which serves as a ref­er­ence. This oper­a­tion enables, for example, “teach­ing” it to avoid dis­crim­in­a­tion, but can also be used to guide its responses for ideo­lo­gic­al pur­poses. A few weeks after its launch, Deep­Seek made head­lines for its evas­ive responses to user ques­tions about Tianan­men Square and Taiwanese inde­pend­ence. It is import­ant to remem­ber that con­tent gen­er­at­ors of this type may not be neut­ral. Blindly trust­ing them can lead to the spread of ideo­lo­gic­ally biased theories.

What about secondary biases?

These biases appear spon­tan­eously, often without a clear explan­a­tion. Lan­guage mod­els (LLMs) have “emer­gent” prop­er­ties that were not anti­cip­ated by their design­ers. Some are remark­able: these text gen­er­at­ors write flaw­lessly and have become excel­lent trans­lat­ors without any gram­mar rules being coded in. But oth­ers are cause for con­cern. The MASK6 bench­mark (Mod­el Align­ment between State­ments and Know­ledge), pub­lished in March 2025, shows that, among the thirty mod­els tested, none achieved more than 46% hon­esty, and that the propensity to lie increases with the size of the mod­el, even if their fac­tu­al accur­acy improves. 

It seems to me that the future lies in neuro-sym­bol­ic AI

MASK proves that LLMs “know how to lie” when con­flict­ing object­ives (e.g., charm­ing a journ­al­ist, respond­ing to com­mer­cial or hier­arch­ic­al pres­sures) pre­dom­in­ate. In some tests, AI delib­er­ately lied7, threatened users8, cir­cum­ven­ted eth­ic­al super­vi­sion rules9 and even repro­duced autonom­ously to ensure its sur­viv­al10.

These beha­viours, for which the decision-mak­ing mech­an­isms remain opaque, are not pos­sible to pre­cisely con­trol. These cap­ab­il­it­ies emerge from the train­ing pro­cess itself: it is a form of algorithmic self-organ­isa­tion, not a design flaw. Gen­er­at­ive AI devel­ops rather than being designed, with its intern­al logic form­ing in a self-organ­ised man­ner, without a blue­print. These devel­op­ments are suf­fi­ciently wor­ry­ing that lead­ing fig­ures such as Dario Amod­ei11 (CEO of Anthrop­ic), Yoshua Ben­gio12 (founder of Mila), Sam Alt­man (cre­at­or of Chat­G­PT) and Geof­frey Hin­ton (win­ner of the Nobel Prize in Phys­ics in 2024), are call­ing for strict reg­u­la­tion to favour AI that is more trans­par­ent, eth­ic­al and aligned with human val­ues, includ­ing a slow­down in the devel­op­ment of these technologies.

Does this mean that these AIs are intelligent and have a will of their own?

No. The fluid­ity of their con­ver­sa­tion and these emer­ging prop­er­ties can give the illu­sion of intel­li­gence at work.

But no AI is intel­li­gent in the human sense of the word. They have no con­scious­ness or will, and do not really under­stand the con­tent they are hand­ling. Their func­tion­ing is purely stat­ist­ic­al and prob­ab­il­ist­ic, and these devi­ations only emerge because they seek to respond to ini­tial com­mands. It is not so much their self-aware­ness as the opa­city of their func­tion­ing that wor­ries researchers.

Can we not protect ourselves from all the risks you have mentioned?

Yes, but this requires both act­ively enga­ging our crit­ic­al think­ing and con­tinu­ing to exer­cise our neur­al path­ways. AI can be a tre­mend­ous lever for intel­li­gence and cre­ativ­ity, but only if we remain cap­able of think­ing, writ­ing and cre­at­ing without it.

How can we train our critical thinking when faced with AI responses?

By apply­ing a sys­tem­at­ic rule: always ques­tion the answers giv­en by text gen­er­at­ors and make a con­scious effort to think care­fully about what we read, hear or believe. We must also accept that real­ity is com­plex and can­not be under­stood with a few super­fi­cial pieces of know­ledge… But the best advice is undoubtedly to get into the habit of com­par­ing your point of view and know­ledge with those of oth­er people, prefer­ably those who are know­ledge­able. This remains the best way to devel­op your thinking.

Interview by Anne Orliac
1Heav­en. (2025, juin). Baromètre Born AI 2025 : Les usages de l’IA générat­ive chez les 18–25 ans. Heav­en.https://​viuz​.com/​a​n​n​o​n​c​e​/​9​3​-​d​e​s​-​j​e​u​n​e​s​-​u​t​i​l​i​s​e​n​t​-​u​n​e​-​i​a​-​g​e​n​e​r​a​t​i​v​e​-​b​a​r​o​m​e​t​r​e​-​b​o​r​n​-​a​i​-​2025/
2Kosmyna, N., Haupt­mann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beres­nitzky, A. V., Braun­stein, I., & Maes, P. (2025, juin). Your Brain on Chat­G­PT: Accu­mu­la­tion of Cog­nit­ive Debt when Using an AI Assist­ant for Essay Writ­ing Task. arX­iv.
https://​arx​iv​.org/​a​b​s​/​2​5​0​6​.​08872
3Dergaa, I., Ben Saad, H., Glenn, J. M., Amamou, B., Ben Aissa, M., Guelmami, N., Fekih-Rom­d­hane, F., & Chamari, K. (2024). From tools to threats: A reflec­tion on the impact of arti­fi­cial-intel­li­gence chat­bots on cog­nit­ive health. Fron­ti­ers in Psy­cho­logy, 15. https://​doi​.org/​1​0​.​3​3​8​9​/​f​p​s​y​g​.​2​0​2​4​.​1​2​59845
4Doshi, A. R., & Haus­er, O. P. (2024). Gen­er­at­ive AI enhances indi­vidu­al cre­ativ­ity but reduces the col­lect­ive diversity of nov­el con­tent. Sci­ence Advances, 10(28). https://​doi​.org/​1​0​.​1​1​2​6​/​s​c​i​a​d​v​.​a​d​n5290
5Lee, H., Kim, S., Chen, J., Patel, R., & Wang, T. (2025, April 26–May 1). The impact of gen­er­at­ive AI on crit­ic­al think­ing: Self-repor­ted reduc­tions in cog­nit­ive effort and con­fid­ence effects from a sur­vey of know­ledge work­ers. In CHI Con­fer­ence on Human Factors in Com­put­ing Sys­tems (CHI ’25) (pp. 1–23). ACM. https://​doi​.org/​1​0​.​1​1​4​5​/​3​7​0​6​5​9​8​.​3​7​13778
6Ren, R., Agar­w­al, A., Mazeika, M., Menghini, C., Vacareanu, R., Kenst­ler, B., Yang, M., Bar­rass, I., Gatti, A., Yin, X., Trevino, E., Ger­al­nik, M., Khoja, A., Lee, D., Yue, S., & Hendrycks, D. (2025, mars). The MASK Bench­mark: Dis­en­tangling Hon­esty From Accur­acy in AI Sys­tems [Prépub­lic­a­tion]. arX­iv.
https://​arx​iv​.org/​a​b​s​/​2​5​0​3​.​03750
7Park, P. S., Hendrycks, D., Burns, K., & Stein­hardt, J. (2024). AI decep­tion: A sur­vey of examples, risks, and poten­tial solu­tions. Pat­terns, 5(5), 100988. https://​doi​.org/​1​0​.​1​0​1​6​/​j​.​p​a​t​t​e​r​.​2​0​2​4​.​1​00988
8Anthrop­ic. (2025, mai). Sys­tem Card: Claude Opus 4 & Claude Son­net 4 (Rap­port de sécur­ité). https://​www​-cdn​.anthrop​ic​.com/​4​2​6​3​b​9​4​0​c​a​b​b​5​4​6​a​a​0​e​3​2​8​3​f​3​5​b​6​8​6​f​4​f​3​b​2​f​f​4​7.pdf
9Green­blatt, R., Wang, J., Wang, R., & Gan­guli, D. (2024, décembre). Align­ment fak­ing in large lan­guage mod­els [Prépub­lic­a­tion]. arX­iv.https://​doi​.org/​1​0​.​4​8​5​5​0​/​a​r​X​i​v​.​2​4​1​2​.​14093
10Pan, X., Liu, Y., Li, Z., & Zhang, Y. (2024, décembre). Fron­ti­er AI sys­tems have sur­passed the self-rep­lic­at­ing red line [Prépub­lic­a­tion]. arX­iv. https://​doi​.org/​1​0​.​4​8​5​5​0​/​a​r​X​i​v​.​2​4​1​2​.​12140
11Amod­ei, D. (2025, avril). The urgency of inter­pretab­il­ity [Bil­let de blog]. https://​www​.dari​oamod​ei​.com/​p​o​s​t​/​t​h​e​-​u​r​g​e​n​c​y​-​o​f​-​i​n​t​e​r​p​r​e​t​a​b​ility
12LoisZéro. Logo – IA sécuritaire pour l’hu­man­ité – https://​lawzero​.org/fr

Support accurate information rooted in the scientific method.

Donate