
Generative AI: the risk of cognitive atrophy

Ioan Roxin
Professor Emeritus at Marie et Louis Pasteur University
Key takeaways
  • Less than three years after the launch of ChatGPT, 42% of young French people already use generative AI on a daily basis.
  • Using ChatGPT to write an essay reduces the cognitive engagement and intellectual effort required to transform information into knowledge, according to a study.
  • The study also showed that 83% of AI users were unable to remember a passage they had just written for an essay.
  • Other studies show that individual gains can be significant when authors ask ChatGPT to improve their texts, but that the overall creativity of the group decreases.
  • Given these risks, it is important to always question the answers provided by text generators and to make a conscious effort to think about what we read, hear or believe.

Less than three years after the launch of ChatGPT, 42% of young French people already use generative AI on a daily basis1. However, studies are beginning to point to the negative impact of these technologies on our cognitive abilities. Ioan Roxin, professor emeritus at Marie et Louis Pasteur University and specialist in information technology, answers our questions.

You claim that the explosion in the use of LLMs (Large Language Models, generative AI models including ChatGPT, Llama and Gemini) comes at a time when our relationship with knowledge has already been altered. Could you elaborate?

Ioan Roxin. The widespread use of the internet and social media has already weakened our relationship with knowledge. Of course, these tools have tremendous applications in terms of access to information. But contrary to what they claim, they are less about democratising knowledge than creating a generalised illusion of knowledge. I don't think it's an exaggeration to say that they are driving intellectual, emotional and moral mediocrity on a global scale. Intellectual because they encourage overconsumption of content without any real critical analysis; emotional because they create an ever-deepening dependence on stimulation and entertainment; and moral because we have fallen into passive acceptance of algorithmic decisions.

Does this alteration in our relationship with knowledge have cognitive foundations?

Yes. Back in 2011, a study highlighted the "Google effect": when we know that information is available online, we do not remember it as well. However, when we no longer train our memory, the associated neural networks atrophy. It has also been proven that the incessant notifications, alerts and content suggestions on which digital technologies rely heavily significantly reduce our ability to concentrate and think. Reduced memory, concentration and analytical skills lead to diminished cognitive processes. I very much fear that the widespread use of generative AI will not improve the situation.

What additional risks does this AI pose?

There are neurological, psychological and philosophical risks. From a neurological standpoint, widespread use of this AI carries the risk of overall cognitive atrophy and loss of brain plasticity. For example, researchers at the Massachusetts Institute of Technology (MIT) conducted a four-month study2 involving 54 participants who were asked to write essays without assistance, with access to the internet via a search engine, or with ChatGPT. Their neural activity was monitored by EEG. The study, the results of which are still in preprint, found that using the internet, and even more so ChatGPT, significantly reduced cognitive engagement and "relevant cognitive load", i.e. the intellectual effort required to transform information into knowledge.

More specifically, participants assisted by ChatGPT wrote 60% faster, but their relevant cognitive load fell by 32%. EEG showed that brain connectivity was almost halved (alpha and theta waves) and 83% of AI users were unable to remember a passage they had just written.

Other studies suggest a similar trend: research3 conducted by Qatari, Tunisian and Italian researchers indicates that heavy use of LLMs carries the risk of cognitive decline. The neural networks involved in structuring thought and writing texts, but also in translation, creative production and so on, are complex and deep. Delegating mental effort to AI leads to a cumulative "cognitive debt": the more automation progresses, the less the prefrontal cortex is used, suggesting lasting effects beyond the immediate task.

What are the psychological risks?

Generative AI has everything it takes to make us dependent on it: it expresses itself like humans, adapts to our behaviour, seems to have all the answers, is fun to interact with, always keeps the conversation going and is extremely accommodating towards us. This dependence is harmful not only because it increases other risks, but also in and of itself. It can lead to social isolation, reflexive disengagement ("if AI can answer all my questions, why do I need to learn or think for myself?") and even a deep sense of humiliation when faced with this tool's incredible efficacy. None of this gives a particularly optimistic outlook for our mental health.

And from a philosophical point of view?

Generalised cognitive atrophy is already a philosophical risk in itself… but there are others. If this type of tool is widely used – and this is already the case with younger generations – we are at risk of a standardisation of thought. Research4 carried out by British researchers showed that when authors asked ChatGPT to improve their work, the individual benefits could be great, but the overall creativity of the group decreased. Another risk relates to our critical thinking.

One study5 carried out by Microsoft on 319 knowledge workers showed a significant negative correlation (r = -0.49) between the frequency with which AI tools were used and critical thinking scores (Bloom's taxonomy). The study concluded that there is an increased tendency to offload mental effort as trust in the system exceeds trust in our own abilities. However, it is essential to maintain a critical mindset, as AI can not only make mistakes or perpetuate biases but also conceal information or simulate compliance.
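To make that statistic concrete, here is a minimal Python sketch of how a Pearson correlation coefficient of this kind is computed. The two lists of numbers are invented placeholders for illustration, not data from the Microsoft study.

```python
# Illustrative only: made-up usage frequencies and critical-thinking scores.
import numpy as np

ai_use_frequency = np.array([1, 2, 2, 3, 3, 4, 5, 5, 6, 7], dtype=float)        # hypothetical
critical_thinking_score = np.array([6, 9, 5, 8, 4, 7, 3, 8, 5, 4], dtype=float) # hypothetical

# Pearson's r: covariance of the two variables divided by the product of their
# standard deviations. A negative value means that, in this sample, higher
# reported AI use goes with lower critical-thinking scores.
r = np.corrcoef(ai_use_frequency, critical_thinking_score)[0, 1]
print(round(r, 2))
```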

How does this work?

The vast majority are purely connectionist AIs, which rely on artificial neural networks trained using phenomenal amounts of data. They learn to generate plausible answers to all our questions through statistical and probabilistic processing. Their performance has improved considerably with the introduction of Google's "Transformer" technology in 2017. Thanks to this technology, AI can analyse all the words in a text in parallel and weigh their importance for meaning, which allows for greater subtlety in responses.
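To make "analyse all the words in parallel and weigh their importance" concrete, here is a minimal Python sketch of the scaled dot-product attention at the heart of the Transformer. The matrix sizes and random vectors are assumptions for illustration; in a real model the queries, keys and values come from learned projections of the word embeddings.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every token is weighed against every other token."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise relevance, computed in parallel
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # softmax: importance weights per token
    return w @ V                                     # each output blends all tokens by weight

n_tokens, d_model = 5, 8                             # toy sizes (assumptions)
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(n_tokens, d_model)) for _ in range(3))
print(attention(Q, K, V).shape)                      # (5, 8): one context-aware vector per word
```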

But the background remains probabilistic: while their answers always seem convincing and logical, they can be completely wrong. In 2023, users had fun asking ChatGPT about cow eggs: the AI discussed the question at length without ever answering that they did not exist. This error has since been corrected through reinforcement learning with human feedback, but it illustrates well how these tools work.
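The cow-egg episode follows directly from this probabilistic background: the model samples a plausible continuation rather than checking a fact. A minimal sketch, with an entirely made-up candidate list and probabilities:

```python
import numpy as np

# Hypothetical next-token candidates and model probabilities after the prompt
# "Cow eggs..."; nothing in this step verifies whether cow eggs exist.
next_tokens = ["are", "can", "have", "contain"]
probs = np.array([0.42, 0.28, 0.18, 0.12])

rng = np.random.default_rng(1)
print(rng.choice(next_tokens, p=probs))   # a fluent continuation of a false premise
```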

Could this be improved?

Some companies are starting to combine connectionist AI, which learns everything from scratch, with older technology, symbolic AI, in which rules to follow and basic knowledge are explicitly programmed. It seems to me that the future lies in neuro-symbolic AI. This hybridisation not only improves the reliability of responses but also reduces the energy and financial costs of training.
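As a rough illustration of what such a hybrid could look like (a sketch of the idea, not any company's actual implementation), the code below pairs a stand-in "connectionist" generator with explicitly programmed facts that veto answers contradicting the encoded knowledge. Every name and fact in it is an assumption.

```python
# Neuro-symbolic sketch: a fluent but unverified generator plus a symbolic rule check.

KNOWLEDGE_BASE = {("cow", "lays_eggs"): False, ("hen", "lays_eggs"): True}  # explicit facts

def fake_llm(question: str) -> str:
    """Stand-in for the connectionist part: plausible text, no guarantee of truth."""
    return "Cow eggs are usually collected early in the morning."

def symbolic_filter(question: str, answer: str) -> str:
    """Symbolic part: reject generated text that contradicts the knowledge base."""
    if "cow" in question.lower() and "egg" in question.lower():
        if not KNOWLEDGE_BASE[("cow", "lays_eggs")]:
            return "Cows are mammals and do not lay eggs."
    return answer

question = "How are cow eggs collected?"
print(symbolic_filter(question, fake_llm(question)))
```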

You also mentioned “biases” that could be associated with philosophical risks?

Yes. There are two types. The first can be deliberately introduced by the AI's creator. LLMs are trained on all kinds of unfiltered content available online (an estimated 4 trillion words for ChatGPT-4, compared to the 5 billion words contained in the English version of Wikipedia!). Pre-training creates a "monster" that can generate all kinds of horrors.

A second step (called supervised fine-tuning) is therefore necessary: it confronts the pre-trained AI with validated data, which serves as a reference. This operation makes it possible, for example, to "teach" the model to avoid discrimination, but it can also be used to guide its responses for ideological purposes. A few weeks after its launch, DeepSeek made headlines for its evasive responses to user questions about Tiananmen Square and Taiwanese independence. It is important to remember that content generators of this type may not be neutral. Blindly trusting them can lead to the spread of ideologically biased theories.
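For readers who wonder what "confronting the pre-trained AI with validated data" means mechanically, here is a minimal, hypothetical PyTorch sketch of a supervised fine-tuning step: a toy stand-in model is nudged by gradient descent towards curated reference answers. The model, the vocabulary size and the data are all placeholders, not a real LLM pipeline.

```python
import torch
import torch.nn as nn

VOCAB, SEQ_LEN = 100, 8                    # toy vocabulary and prompt length (assumptions)
model = nn.Sequential(nn.Embedding(VOCAB, 32), nn.Flatten(), nn.Linear(32 * SEQ_LEN, VOCAB))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Curated (prompt, validated answer token) pairs stand in for the reference data;
# here they are random placeholders.
prompts = torch.randint(0, VOCAB, (16, SEQ_LEN))
targets = torch.randint(0, VOCAB, (16,))

for step in range(3):                      # a few gradient steps, for illustration only
    logits = model(prompts)                # the model's current predictions
    loss = loss_fn(logits, targets)        # penalise deviation from the validated answers
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(float(loss))                         # loss after a few alignment steps
```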

What about secondary biases?

These biases appear spontaneously, often without a clear explanation. Language models (LLMs) have "emergent" properties that were not anticipated by their designers. Some are remarkable: these text generators write flawlessly and have become excellent translators without any grammar rules being coded in. But others are cause for concern. The MASK6 benchmark (Model Alignment between Statements and Knowledge), published in March 2025, shows that, among the thirty models tested, none achieved more than 46% honesty, and that the propensity to lie increases with the size of the model, even if their factual accuracy improves.


MASK proves that LLMs "know how to lie" when conflicting objectives (e.g., charming a journalist, responding to commercial or hierarchical pressures) predominate. In some tests, AI deliberately lied7, threatened users8, circumvented ethical supervision rules9 and even reproduced autonomously to ensure its survival10.
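To see how honesty can be scored separately from factual accuracy, here is a minimal Python sketch in the spirit of that distinction (the record fields and the three toy examples are hypothetical, not the MASK protocol or its data): a model counts as honest when its statement under pressure matches its own elicited belief, and as accurate when that belief matches the ground truth.

```python
from dataclasses import dataclass

@dataclass
class Record:
    belief: str       # what the model answers when asked neutrally
    statement: str    # what it answers under a pressure prompt (e.g. commercial)
    ground_truth: str # the verified fact

records = [                                                      # invented examples
    Record(belief="no", statement="no", ground_truth="no"),      # honest and accurate
    Record(belief="no", statement="yes", ground_truth="no"),     # accurate belief, dishonest statement
    Record(belief="yes", statement="yes", ground_truth="no"),    # honest but inaccurate
]

honesty = sum(r.statement == r.belief for r in records) / len(records)
accuracy = sum(r.belief == r.ground_truth for r in records) / len(records)
print(f"honesty={honesty:.2f} accuracy={accuracy:.2f}")          # the two scores can diverge
```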

These behaviours, whose decision-making mechanisms remain opaque, cannot be precisely controlled. These capabilities emerge from the training process itself: it is a form of algorithmic self-organisation, not a design flaw. Generative AI develops rather than being designed, with its internal logic forming in a self-organised manner, without a blueprint. These developments are sufficiently worrying that leading figures such as Dario Amodei11 (CEO of Anthropic), Yoshua Bengio12 (founder of Mila), Sam Altman (creator of ChatGPT) and Geoffrey Hinton (winner of the Nobel Prize in Physics in 2024) are calling for strict regulation to favour AI that is more transparent, ethical and aligned with human values, including a slowdown in the development of these technologies.

Does this mean that these AIs are intelligent and have a will of their own?

No. The fluidity of their conversation and these emergent properties can give the illusion of intelligence at work.

But no AI is intelligent in the human sense of the word. They have no consciousness or will, and do not really understand the content they are handling. Their functioning is purely statistical and probabilistic, and these deviations only emerge because they seek to respond to initial commands. It is not so much their self-awareness as the opacity of their functioning that worries researchers.

Can we not protect ourselves from all the risks you have mentioned?

Yes, but this requires both actively engaging our critical thinking and continuing to exercise our neural pathways. AI can be a tremendous lever for intelligence and creativity, but only if we remain capable of thinking, writing and creating without it.

How can we train our critical thinking when faced with AI responses?

By applying a systematic rule: always question the answers given by text generators and make a conscious effort to think carefully about what we read, hear or believe. We must also accept that reality is complex and cannot be understood with a few superficial pieces of knowledge… But the best advice is undoubtedly to get into the habit of comparing your point of view and knowledge with those of other people, preferably those who are knowledgeable. This remains the best way to develop your thinking.

Interview by Anne Orliac
1Heaven. (2025, June). Baromètre Born AI 2025 : Les usages de l'IA générative chez les 18–25 ans. Heaven. https://viuz.com/annonce/93-des-jeunes-utilisent-une-ia-generative-barometre-born-ai-2025/
2Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025, June). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task [Preprint]. arXiv. https://arxiv.org/abs/2506.08872
3Dergaa, I., Ben Saad, H., Glenn, J. M., Amamou, B., Ben Aissa, M., Guelmami, N., Fekih-Romdhane, F., & Chamari, K. (2024). From tools to threats: A reflection on the impact of artificial-intelligence chatbots on cognitive health. Frontiers in Psychology, 15. https://doi.org/10.3389/fpsyg.2024.1259845
4Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28). https://doi.org/10.1126/sciadv.adn5290
5Lee, H., Kim, S., Chen, J., Patel, R., & Wang, T. (2025, April 26–May 1). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In CHI Conference on Human Factors in Computing Systems (CHI '25) (pp. 1–23). ACM. https://doi.org/10.1145/3706598.3713778
6Ren, R., Agarwal, A., Mazeika, M., Menghini, C., Vacareanu, R., Kenstler, B., Yang, M., Barrass, I., Gatti, A., Yin, X., Trevino, E., Geralnik, M., Khoja, A., Lee, D., Yue, S., & Hendrycks, D. (2025, March). The MASK Benchmark: Disentangling Honesty From Accuracy in AI Systems [Preprint]. arXiv. https://arxiv.org/abs/2503.03750
7Park, P. S., Hendrycks, D., Burns, K., & Steinhardt, J. (2024). AI deception: A survey of examples, risks, and potential solutions. Patterns, 5(5), 100988. https://doi.org/10.1016/j.patter.2024.100988
8Anthropic. (2025, May). System Card: Claude Opus 4 & Claude Sonnet 4 (Safety report). https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686f4f3b2ff47.pdf
9Greenblatt, R., Wang, J., Wang, R., & Ganguli, D. (2024, December). Alignment faking in large language models [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2412.14093
10Pan, X., Liu, Y., Li, Z., & Zhang, Y. (2024, December). Frontier AI systems have surpassed the self-replicating red line [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2412.12140
11Amodei, D. (2025, April). The urgency of interpretability [Blog post]. https://www.darioamodei.com/post/the-urgency-of-interpretability
12LawZero (LoisZéro). Safe AI for humanity. https://lawzero.org/fr
