
How is generative AI changing modern education?

Michel Barabel
Associate Professor at Sciences Po and Researcher at Université Paris-Est Créteil
Key takeaways
  • Released by OpenAI in 2023, GPT-4 achieved performance levels comparable to those of postgraduate students in logic, mathematics and academic writing exercises.
  • Language models are changing the way students use their reasoning and analytical skills; systematic reliance on them may weaken brain plasticity and the densification of neural networks.
  • Knowledge acquisition can be weakened when AI systematically replaces intellectual effort in key areas of higher education: written work, independent study and assessment.
  • Many questions are being raised about the balance between algorithmic assistance and human cognitive effort, particularly given the rapid evolution of AI models.
  • Frameworks are beginning to emerge in which AI is neither absent nor central but integrated in a differentiated manner according to the cognitive objectives pursued.

The emergence of generative artificial intelligence in education systems marks a technological turning point. Unlike the e-learning platforms and teaching aids of the 2000s and 2010s, large language models now produce structured reasoning, synthesise complex corpora and interact with learners in an adaptive manner. GPT-4, released by OpenAI in 2023, has achieved performance levels comparable to those of graduate students on logic, mathematics and academic writing exercises1. Other architectures, such as Anthropic's Claude and Meta's LLaMA, confirm this dynamic while exploring strategies for governing generative outputs and reducing bias2. These advances offer new opportunities to personalise learning paths, strengthen individualised tutoring and broaden access to educational resources. However, they raise questions about frequency of use (see Figure 1) and the cognitive conditions of AI-assisted learning, as certain aspects of knowledge acquisition may be undermined when this technology systematically replaces intellectual effort3.

Figure 1. Frequency of AI use in a professional setting. Source: Ministry of Higher Education and Research4.

These issues directly affect key aspects of higher education, such as written work, independent study and assessment methods. Furthermore, these changes call for an approach that combines academic research with the observation of teaching practices.

Cognitive effort put to the test by generative models

Large language models are changing the way students use their reasoning and analytical skills. GPT-4, for example, performs at a level comparable to that of graduate students on logic, mathematics and text synthesis exercises5. This raises questions about the balance between algorithmic assistance and human cognitive effort. "The problem in education arises when students systematically use AI to write a presentation, thesis, dissertation or even to learn a concept," says Michel Barabel, a lecturer at Université Paris-Est whose work focuses on organisational transformation, particularly skills management, professional training and learning cultures. Systematic outsourcing can hinder the development of essential cognitive abilities. "Research shows that in this case, the cost of cognitive effort increases rapidly and doing things on your own becomes increasingly difficult."

The theoretical frameworks of cognitive psychology shed light on this phenomenon. Daniel Kahneman's distinction between System 1 thinking, which is fast, automatic and economical with effort, and System 2 thinking, which is analytical, creative and engaged in solving new situations, serves as a reference point. Barabel observes that "the great promise of AI is precisely to outsource simple, repetitive Type 1 brain tasks in order to free up time for Type 2 activities." Yet repeated use of AI even for fundamental Type 1 tasks can limit the densification of the neural networks necessary for creativity and critical thinking.

A comparison with other uses of technology illustrates this risk. "This dependence raises a central question: that of a gradual impoverishment of cognitive abilities," explains Barabel, referring to examples observed in other contexts, such as London taxi drivers becoming dependent on GPS for navigation. Neuroscience research shows that learning based on active cognitive engagement promotes brain plasticity and the densification of neural networks, while a sustained reduction in analytical effort limits these mechanisms and weakens the mobilisation of skills in complex situations6. "However, we know that it is very difficult to bring out genuine creativity without this foundation of prior cognitive effort," adds the expert.

This reflection on cognitive effort highlights that the effects of generative AI depend less on the technology itself than on how it is used. Thoughtful and regulated use can support creativity and productivity, while passive use can compromise the development of essential intellectual skills.

Educational resources undergoing gradual restructuring

The rapid spread of generative AI tools has brought to light uses that are already widespread among students, often outside any institutional framework. Several studies show that these tools are being used to write essays, structure dissertations and prepare assessments, without teaching methods having been designed to take this into account7. This situation has led some institutions to rethink not only regulation, but the very nature of the activities offered.

According to Michel Barabel, "the issue cannot be reduced to a simple opposition between authorisation and prohibition. The problem should not be framed in these terms." He states that AI combines strong educational potential with real limitations. In the systems he observes, the challenge is to distinguish between different types of activities depending on the role assigned to the machine. "Some activities can be entrusted entirely to AI without any particular regulation," particularly when it comes to procedures or methodological assistance, while others must remain exclusively human: "The use of AI is prohibited in order to preserve the educational relationship, Type 1 brain work, creativity and critical thinking."

This differentiation is reflected in concrete terms by transparency measures. "At Sciences Po and IAE Paris-Est, students must explain their use of AI in a dedicated appendix. We authorise its use, but we require an AI appendix in all work," explains Barabel. This requirement shifts the focus of assessment from the result alone to the intellectual process involved. "For us, the new form of plagiarism is not using AI, but using it without declaring it," he adds.

Assessment methods have also been adjusted to take these practices into account. Barabel observes that "a presentation due the following week is now very likely to be produced 60, 70, or even 90 per cent with the help of AI." In this context, "whereas written work used to carry more weight, oral work now predominates in order to assess students' comprehension of concepts and ability to defend their reasoning."

Between delegation, assistance and augmentation, educational applications are structured around a principle of articulation. "There are situations where AI assists the student," notes Barabel, while in other configurations, "the student produces a first draft and AI helps them improve it through questioning and suggestions." These experiments are gradually shaping a framework in which this technology is neither absent nor central but integrated in a differentiated manner according to the cognitive objectives pursued.

Between promises of improvement and risks of fragmentation

The current limitations of generative AI in education have less to do with its technical performance than with the systemic effects it has on learning trajectories. The cognitive benefits observed depend heavily on the conditions of use, with sometimes contradictory results depending on the experimental protocols. Michel Barabel therefore urges caution in interpreting the initial data available. "Some studies, notably from MIT, have shown a rapid decline in certain cognitive abilities, but on very small samples and with questionable protocols," he points out, while noting that other studies highlight positive effects on certain skills.

This heterogeneity of results raises a central question: the intensity and, above all, the nature of use. "It is not certain that there is a universal threshold," says Barabel, emphasising that "the problem lies more in the organisation of activities than in the volume of interaction with AI." He proposes a theoretical balance in which "100% human should represent at least 20% of activities" in order to preserve the fundamental cognitive functions related to effort, independent thinking and the building of judgement.


Beyond individual effects, researchers are concerned about inequality dynamics that could be amplified by intelligent systems. Differences in access to the most powerful versions of the models, often subject to paid subscriptions, are a primary factor of differentiation. Barabel highlights the fact that "families' economic capital could determine access to more powerful AI used in a private setting," creating a cumulative advantage for certain students. Added to this are institutional disparities: "Some institutions have the financial and educational resources to deploy AI, train teachers and implement ethical charters. Others do not."

These material differences are compounded by cultural and cognitive inequalities. "The major risk is therefore an increase in the gap between students," points out Barabel, distinguishing between those who use AI strategically, those who use it mechanically, and those who reinvest the time freed up in activities with high cognitive value. Research in the sociology of education shows that these mechanisms of differentiated appropriation play a decisive role in the reproduction or transformation of school hierarchies8.

Finally, the rapid pace of technological change poses a structural challenge for educational institutions. "What we say today may be called into question in a few months' time by a new generation of AI," says Barabel. This technological instability raises questions about the ability of education systems to combine innovation, equity and the development of fundamental human skills, in a context where the production and transmission of knowledge are themselves undergoing profound change.

Aicha Fall
1. OpenAI, GPT-4 Technical Report, 2023. https://cdn.openai.com/papers/gpt-4.pdf
2. Bubeck, S., et al., Sparks of Artificial General Intelligence: Early Experiments with GPT-4, 2023. https://arxiv.org/pdf/2303.12712
3. UNESCO, Guide pour l'IA générative dans l'éducation et la recherche, 2023. https://www.unesco.org/fr/digital-education/ai-future-learning/guidance
4. Ministère chargé de l'enseignement supérieur et de la recherche, IA et enseignement supérieur : formation, structuration et appropriation par la société, June 2025. https://www.enseignementsup-recherche.gouv.fr/sites/default/files/2025-07/rapport-intelligence-artificielle-et-enseignement-sup-rieur-formation-structuration-et-appropriation-par-la-soci-t-37540.pdf
5. OECD, OECD Digital Education Outlook 2023: Towards an Effective Digital Education Ecosystem, 2023. https://www.oecd.org/content/dam/oecd/en/publications/reports/2023/12/oecd-digital-education-outlook-2023_c827b81a/c74f03de-en.pdf
6. Kolb, B., Gibb, R. (2011). Brain plasticity and behaviour in the developing brain. Journal of the Canadian Academy of Child and Adolescent Psychiatry, 20(4), 265–276. https://pmc.ncbi.nlm.nih.gov/articles/PMC3222570/
7. Von Garrel, Mayer et al., Usage des Intelligences artificielles génératives à l'université : regards croisés entre usagers et professionnels des bibliothèques universitaires, revue COSSI, 2025. https://revue-cossi.numerev.com/pdf/articles/revue-13/3814-usage-des-intelligences-artificielles-generatives-a-l-universite-regards-croises-entre-usagers-et-professionnels-des-bibliotheques-universitaires
8. UNESCO, L'intelligence artificielle dans l'éducation. https://www.unesco.org/fr/digital-education/artificial-intelligence
