Generated using AI
Digital · Economics

AI and productivity: rethinking workforce training and social cohesion

Eric Hazan
Professor of Digital Strategy at Sciences Po, Co-Founder of Ardabelle Capital, and Senior Partner Emeritus at McKinsey & Company
Philippe Tibi
Professor of Strategy and Finance at Ecole Polytechnique (IP Paris) and Founder of Pergamon Campus
Key takeaways
  • AI has the potential to improve productivity without reducing workload: the actual gains are reinvested in increased activity rather than in freed-up time.
  • The same technology can either enhance human capabilities or replace them, depending on how it is deployed.
  • Young graduates are paying a heavy price as junior roles are the first to disappear, severing the traditional chain of transmission of professional expertise.
  • A Marshall Plan for training is urgently needed: curriculum reform, mass retraining, and investment in human capital on a par with AI infrastructure.
  • Like the “Engels pause”, AI could concentrate its benefits among the few — unless we invent new forms of solidarity and deliberately direct progress towards workers.

Rarely has a technology sparked so much enthusiasm and so much anxiety at once. On the one hand, the promise of productivity gains unseen since the 1970s; on the other, the spectre of dwindling job prospects for an entire generation of young graduates. But these two realities are not contradictory. Together, they form a fundamental contemporary equation: AI is an amplifier – of value for those who prepare for it, of inequality for those left behind. To grasp this challenge fully, we must follow the thread that runs from productivity to employment, from employment to business organisation, and from organisation to a collective ambition for training that matches the scale of the challenge – the journey this article sets out to take.

Productivity: a promise beginning to materialise

Let’s start with some good news: signs of a real productivity surge are beginning to mount. In January 2026, economists at Apollo Global Management, led by Torsten Slok, identified tangible increases in DevOps, process automation and support functions1. The most comprehensive summary available to date is the Early Signals of AI Impact dashboard, which aggregates 303 sources in real time (academic studies, sector reports, field data) across 17 indicators of AI-driven labour transformation: individual productivity, recruitment trends, shifts in required skills, team restructuring. Its consensus assessment is clear: the adoption of AI tools is accelerating across all industries, productivity measured in pioneering companies is rising, and roles involving repetitive tasks are being reduced. The crucial nuance is that jobs are changing faster than they are disappearing, and no aggregate macroeconomic shift is yet measurable. It is precisely this ‘yet’ that lends the issue of preparedness its urgency2. But AI does more than replace tasks: it transforms the very pace of innovation. Bontadini, Haskel and their co-authors describe it as a ‘meta-innovation’ – an innovation in the method of innovation – with multiplier effects on total factor productivity that the US economy is only just beginning to feel, whilst Europe lags behind in adoption to a worrying extent3.

AI does more than just replace tasks: it transforms the very pace of innovation

Nevertheless, two caveats are in order. The first relates to Moravec’s paradox: AI excels at cognitively complex tasks (writing, summarising, coding) but remains clumsy at the contextual manual tasks that seem easy to us. This discrepancy partly explains the gap between the touted potential and actual adoption in the economy4. The second caveat is even more counter-intuitive: according to a Harvard Business Review study published in February 2026, AI does not lighten the workload; it intensifies it. Employees who adopt AI tools handle more cases, respond to more requests and take on new tasks. The productivity gain is real, but it does not automatically translate into freed-up time: it is reinvested in additional work5.

Another key finding from research by the NBER and MIT is that AI can boost workers’ productivity without necessarily automating entire tasks. This distinction between augmentation and automation is not merely semantic: it determines the full range of the technology’s social impacts. In augmentation mode, AI becomes a cognitive prosthesis: it extends the individual’s capabilities, enabling them to handle greater complexity, produce faster, and better document their decisions. Jobs are preserved and often enhanced. In automation mode, it replaces human activity itself: jobs may disappear or undergo radical transformation. Both modes coexist in most organisations, sometimes within the same role, depending on the tasks and the circumstances6.

Acemoglu, Autor and Johnson refine this framework further by distinguishing five categories of technology, two of which deserve particular attention. ‘Expertise-levelling’ technologies enable non-experts to perform tasks previously reserved for specialists; medical AI for diagnostic support is one example. ‘New task creation’ technologies, on the other hand, are the only ones that are ‘unambiguously favourable to workers’: they generate net demand for human expertise by opening up activities that did not previously exist. It is in this latter category that AI’s most solid promise lies, though it is also the most demanding in terms of training7.

Figure 1: Five types of technology and their impacts on work (Acemoglu, Autor, Johnson 2026)

The impact on jobs: who are the canaries in the coal mine?

The metaphor chosen by Erik Brynjolfsson for his summer 2025 study is striking: in coal mines, canaries were sent in to detect toxic gases before the miners entered. Young workers in sectors exposed to AI are today’s canaries8.

The ADP data, the most granular ever used for this type of analysis, speak for themselves: in the United States, between late 2022 and July 2025, employment among 22- to 25-year-olds in roles highly exposed to AI fell by 6%. For young software developers, the decline reached 20% from the November 2022 peak. Entry-level customer support roles also shed nearly 11% of their junior staff. It is the entry-level positions that are disappearing first9.

Figure 2: Canaries in the coal mine: trends in junior employment, 2022–2025 (United States)

This assessment echoes the findings of the Financial Times’ investigation into the ‘great graduate jobs drought’: entire cohorts are graduating from top universities and struggling to find roles commensurate with their qualifications, not because AI has replaced them, but because it has rendered redundant the foundational learning experience that junior roles used to provide10. The consequence is perverse and deserves our attention: if young people no longer acquire their expertise on the job in these first roles, the chain of transmission of professional expertise is broken. AI can amplify expertise, but it cannot create it from scratch. If the generation entering the labour market today does not build its foundations, who will train the experts of 2035?

However, we must not confuse exposure with net destruction, and indeed should question the causality. The author himself points out that the slowdown in hiring in AI-exposed jobs began in spring 2022, i.e. before the release of ChatGPT in November 2022; the Fed’s rise in key interest rates is at least as plausible an explanation11. Even more surprising, Yale’s Budget Lab observes no significant difference between employment levels in the jobs most and least exposed to AI: “the lines look the same”, in the words of its chair. What the exposure metrics do capture with certainty is a decline in job vacancies in the 40% of occupations most exposed since the rise of ChatGPT, which is not equivalent to net job losses. The economy as a whole continues to create jobs; it is their structure that is changing, and with it the nature of the skills required12.
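The distinction between falling vacancies and falling employment is easy to conflate when reading exposure studies. A minimal sketch, with entirely invented numbers (none drawn from the ADP or Budget Lab data), of how the two signals can be separated in occupation-level data:

```python
# Toy illustration (invented data): each occupation has an AI-exposure
# score, a change in job vacancies, and a change in actual employment.
# The point: vacancies can fall sharply in exposed occupations while
# employment barely moves.

occupations = [
    # (name, exposure score 0-1, vacancy change, employment change)
    ("software developer", 0.9, -0.25, -0.02),
    ("customer support",   0.8, -0.18, -0.03),
    ("paralegal",          0.7, -0.12,  0.00),
    ("nurse",              0.2,  0.05,  0.04),
    ("electrician",        0.1,  0.03,  0.03),
]

def mean_changes(rows, min_exposure):
    """Average vacancy and employment change for occupations at or
    above an exposure threshold."""
    selected = [r for r in rows if r[1] >= min_exposure]
    n = len(selected)
    return (sum(r[2] for r in selected) / n,
            sum(r[3] for r in selected) / n)

vac, emp = mean_changes(occupations, 0.7)
print(f"most-exposed occupations: vacancies {vac:+.1%}, employment {emp:+.1%}")
```

In this hypothetical sample, average vacancies in the most-exposed group fall by roughly 18% while employment falls by under 2% – exactly the gap between the two measures that the paragraph above describes.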

In France, the picture is similar. Projections for 2026–2030 identify several socio-professional categories that are particularly vulnerable: intermediate administrative professions, certain support-function managers, and skilled office workers. France’s social model does not make it immune: like its partners, it is exposed to an accelerated restructuring of its labour market13.

Figure 3: Exposure to AI by socio-professional category in France (scores synthesised from France Stratégie, OECD, INSEE and WEF, 2025–2030)

David Autor offers a more nuanced and, ultimately, more optimistic perspective: AI could restore value to skilled labour by amplifying what humans do best (judgement, relationships and contextual creativity). It could even catalyse a revival of skilled middle-class jobs, provided the technology is geared towards augmenting human capabilities rather than simply replacing them. But this prospect requires active policy measures; it will not materialise spontaneously14.

Businesses and AI: between hesitant adoption and profound transformation

While the macroeconomic effects remain to be consolidated, the effects of AI on the internal organisation of businesses are already noticeable. The first lesson from the most rigorous empirical studies is that results depend far less on the technology itself than on the organisational choices that accompany it. Two companies equipped with the same tools can achieve radically different results depending on whether or not they have redesigned their processes, trained their teams and redefined roles15.

This tension between automation and augmentation lies at the heart of the Beyond the Binary report: the two dynamics coexist and combine. The same technology can automate routine tasks whilst enhancing the capacity to focus on high-value-added work. Empirically, three adoption profiles have been identified: ‘cyborgs’, who seamlessly integrate AI into their daily workflow; ‘centaurs’, who alternate between working autonomously and delegating to AI depending on the nature of the task; and ‘self-automators’, who programme AI to take over entire portions of their work. These three modes do not have the same implications for the skills required, the risks of obsolescence, or the training needs. The question is therefore not “does AI replace?” but “how is it deployed, by whom, for what purposes, and with what safeguards?”16

Companies which have successfully deployed AI that enhances their employees’ skills have done so deliberately, refusing to let the logic of automation alone dictate architectural choices

One category under particular scrutiny is that of middle managers. AI exposes hierarchical layers whose added value (information sharing, coordination, reporting) is precisely what automated systems can now handle17. But the managerial role is not limited to the flow of information: it includes talent development, conflict management, contextual interpretation, and building trust within teams – skills that the McKinsey Global Institute identifies as the big winners of the AI era, for which demand is growing precisely because LLMs cannot replicate them. The manager of tomorrow is not the one who filters information; it is the one who creates meaning18.

Finally, and this is perhaps the most important lesson, AI that benefits workers is not built by default. Research from MIT Sloan shows that companies which have successfully deployed AI that enhances their employees’ skills have done so deliberately, refusing to let the logic of automation alone dictate architectural choices. This is not an abstract moral imperative: it is a prerequisite for sustainable performance, in a context where talent retention and internal trust are becoming decisive competitive advantages19. Acemoglu, Autor and Johnson go further: it is not merely a matter of corporate choice, but a documented market failure. Three cumulative factors structurally drive the industry towards automation: path dependence (large firms have built their business models on automation tools from which they derive their profitability); the dominant ideology in research laboratories, oriented towards AGI and largely insensitive to labour effects; and the growing concentration of the sector, which stifles alternative models. “We are not currently on the pro-worker AI path,” states Simon Johnson. This failure calls for responses that go beyond corporate goodwill20.

For a Marshall Plan for training

If AI is reducing entry-level roles, it is undermining the main mechanism through which companies have always passed on their human capital: on-the-job learning. All types of human-AI interaction, from ‘cyborgs’ to ‘centaurs’ and ‘self-automators’, require prior expertise that AI serves to amplify. But this expertise is built up in the early years of a professional career, in the very junior roles that are disappearing. The systemic risk is real: by short-circuiting early-career learning, AI risks creating a generation of professionals without solid foundations – buildings without a ground floor21.

This assessment calls for a response on the scale of the Marshall Plan. The analogy is not merely rhetorical: the 1948 plan was a collective and structured response to the massive destruction of physical capital. AI is causing an accelerated destruction of human capital, not through violence but through obsolescence. The Oxford research on technological unemployment usefully reminds us that every major wave of automation has brought unexpected adaptations; yet no previous wave has had this pace or this cross-cutting impact across the entire spectrum of cognitive functions, from the bottom of the ladder to the most highly skilled professions22.

Skills have an increasingly short lifespan, and continuing professional development systems were designed for a world where obsolescence was measured in decades, not years

Three areas of work are required in parallel. The first is curriculum reform: from primary education to the grandes écoles, we must integrate not only skills for interacting with AI, but above all the abilities that LLMs cannot replicate (critical judgement, reasoning, contextual creativity, and relational intelligence). Research on K-12 skills in the AI era shows that this reform is feasible at scale if we commit the resources needed to prioritise it23.

The second priority is a massive retraining programme targeting the 40% of occupations most at risk. Granular data from Yale’s Budget Lab enables these interventions to be prioritised with unprecedented precision. The challenge is to develop short, certified pathways accessible to those currently in employment, with tripartite governance involving the state, professional bodies and businesses. Speed is crucial here: skills have an increasingly short lifespan, and continuing professional development systems were designed for a world where obsolescence was measured in decades, not years.

The third area of focus is financial. Companies are investing hundreds of billions in AI infrastructure: data centres, chips, foundation models. A comparable investment in human capital is not merely a moral obligation: it is the prerequisite for the model’s social sustainability. The economic scenarios for transformative AI developed in February 2026 in London, as part of a forward-looking exercise bringing together economists and policymakers around the Windfall Trust and Gresham College, illustrate this precisely: in all trajectories where productivity gains are widely distributed, investment in training consistently preceded, or accompanied, technological deployment. The opposite is equally true: where training was deferred, the benefits of AI were concentrated among a few players, widening inequalities rather than reducing them24.

Figure 4: Two trajectories: early training vs deferred training, 2025–2035 (UK, prospective)
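The logic of the two trajectories can be made concrete with a deliberately stylised sketch. All parameters below are hypothetical (a flat 2% annual AI productivity gain, 10% of the workforce retrained per year once training starts); the model is not taken from the Windfall Trust scenarios, only inspired by their contrast between early and deferred training:

```python
# Stylised model (hypothetical parameters): each year's AI productivity
# gain is assumed to be captured by workers only in proportion to the
# share of the workforce already retrained.

def shared_gain(training_start: int, years: range,
                annual_gain: float = 0.02,
                training_rate: float = 0.10) -> float:
    """Cumulative share of AI gains accruing broadly to workers,
    where the worker-captured fraction each year equals the trained
    share of the workforce (between 0 and 1)."""
    total = 0.0
    for year in years:
        trained = min(1.0, max(0.0, (year - training_start) * training_rate))
        total += annual_gain * trained
    return total

horizon = range(2025, 2036)            # the 2025-2035 horizon of Figure 4
early = shared_gain(2025, horizon)     # training starts with deployment
late = shared_gain(2030, horizon)      # training deferred five years
print(f"early training: {early:.3f}, deferred training: {late:.3f}")
```

Under these invented numbers the early trajectory delivers workers several times the cumulative gain of the deferred one; the residual in the deferred case is, by construction, captured elsewhere, which is the concentration effect the paragraph above describes.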

These three areas of focus (curricula, reskilling, and funding) are necessary. But they are not sufficient unless the root cause is addressed: the direction of AI itself. Acemoglu, Autor and Johnson identify three complementary policy levers. The first is the application of competition law to break up concentration in the sector and open the market to new entrants whose business models are less focused on pure automation. The second is the legal protection of professional expertise: workers whose know-how is absorbed by AI systems should have rights over this contribution, as a safeguard against ‘theft of expertise’25. The third is the institutionalisation of workers’ voices in deployment decisions, at both company level and in sectoral regulation. Training humans to adapt to AI is essential; orienting AI towards humans is just as urgent26.

In the face of social upheaval, creating new forms of solidarity

This training plan is therefore essential for those whose skills are threatened by artificial intelligence. However, history teaches us that industrial revolutions disrupt traditional social frameworks. When they are massive, productivity gains are not merely a statistical fact; they herald a profound structural transformation.

In the 19th century, the harnessing of mechanical and thermal power brutally drove peasants from their fields and displaced artisans in favour of factories operated by low-skilled workers. This process created an urban proletariat plunged into poverty by the glut of labour supply. Despite a spectacular rise in GDP, workers’ living conditions stagnated for nearly half a century. This is the ‘Engels’ pause’27, the period during which the industrial elite captured the bulk of the wealth generated by productivity gains. The upheaval was also characterised by a devastating loss of social bearings: forced displacement, the breakdown of family solidarity, the erosion of traditional workplace hierarchies, and subsistence wages. The economy was thus ‘disembedded’ from society, to use Polanyi’s term28. The intellectual and political response was vigorous: from Karl Marx’s Capital (1867) to the formation of revolutionary and reformist movements, a balance of power developed between capital and labour, gradually leading to modern social security without the ‘great revolution’ that had been predicted.

This historical overview is, of course, simplified. The slow construction of the social contract was neither natural nor peaceful. It took place under the brutal pressure of political and geopolitical upheavals: colonial expansion, the clash of imperialisms, the Soviet revolution and the world wars. The social contract also long relied on the emergence of alternative jobs in the service sector, capable of absorbing the workforce rendered redundant in agriculture and industry. Douglass North emphasises that the survival of such a system depends on the evolution of its institutions to manage these transitions29.

AI could bring about changes comparable to those of previous industrial revolutions

Because it is cognitive, the AI revolution will spread more rapidly than that of the engine, which makes intervention all the more urgent: even a shortened ‘Engels pause’, let alone a 60-year one, would be unacceptable in the 21st century. In its physical applications, AI relies on existing robot architectures or ones that can be reprogrammed within a relatively short timeframe. Like electricity, AI is a cross-cutting technology capable of massively increasing productivity across all sectors. Everything depends on the pace of deployment and real-world impact, but the risk exists and must be analysed and anticipated. AI could bring about changes comparable to those of previous industrial revolutions; it is mostly a question of time, particularly in a Europe that has underinvested in technology and R&D for over a decade. Furthermore, for the past 20 years, the internet revolution has concentrated profits among a small number of players capable of investing heavily in industrial and human capital. This accumulation of capital enables American Big Tech to position itself at the forefront of AI, acquiring rival start-ups, recruiting the best talent and securing the electrical resources needed for deployment. These strategies are reminiscent of the oil oligopolies of the second industrial revolution, which were ultimately dismantled at the start of the 20th century.

Faced with the risk of being left behind, we struggle to envisage future sectors to fall back on. Although an ageing population, education, leisure and healthcare are powerful drivers of employment, these sectors are currently poorly paid and unattractive. At the same time, technology is reigniting geopolitical tensions: it is, for example, one of the root causes of Sino-American friction. We must therefore prepare for a dual revolution: the internal impact of technology on our production systems and the external effects of a new distribution of wealth. As Acemoglu and Johnson point out30, following Korinek and Stiglitz31, adjustment is unlikely to occur spontaneously and requires us to ‘steer’ technical progress32. This position is contested by those who fear the risks of regulatory capture; nevertheless, it raises an interesting question that we should probably anticipate.

It is therefore prudent to work on innovative paradigms, particularly in terms of institutions and financing. The options frequently proposed all represent a break with existing systems: universal basic income, free access to public services, and a deliberate reduction in productivity through shorter working hours. It is too early to take a definitive stance, but these programmes still need to be financed. The issue appears straightforward, since rising productivity increases the pool of wealth to be distributed. There is talk of taxes on robots33 and on productive capital, or even on data, since taxing labour alone will be insufficient34. Another option is the creation of a very large social protection fund, managed like a pension fund and funded by contributions from new companies in return for the provision of the social infrastructure to which they owe their success, modelled on the Alaska Permanent Fund, which redistributes a fraction of the state’s oil revenues to citizens, or on the academic proposals for a ‘citizen fund’ developed by John Roemer and taken up by Daron Acemoglu. In the same vein, David Autor and Neil Thompson (MIT) propose a ‘Universal Basic Capital’: every child would receive a portfolio of productive assets at birth, granting them a permanent right to capital income (including income from automation) without relying on politically vulnerable budgetary transfers. Further ideas are needed, as the sums mentioned here do not measure up to the challenges at stake. Moreover, these proposals require a global consensus, given the mobility of capital and entrepreneurs. This debate must take place before undesirable solutions are imposed on us by developments in the labour market, where we would be ‘acted upon’ rather than ‘acting’.
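Orders of magnitude matter in this debate. As a purely illustrative sketch (every figure below is hypothetical and drawn from none of the proposals cited), the arithmetic of an Alaska-style fund is simple:

```python
def annual_dividend(fund_value: float, payout_rate: float,
                    population: int) -> float:
    """Per-citizen dividend paid out of a collective fund each year."""
    return fund_value * payout_rate / population

# Hypothetical example: a 300bn fund paying out 4% a year to 68m citizens
per_citizen = annual_dividend(300e9, 0.04, 68_000_000)
print(f"{per_citizen:.0f} per citizen per year")
```

Even a fund far larger than this hypothetical one yields modest per-citizen sums, which illustrates the point above that the sums currently discussed do not measure up to the challenges at stake.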

The economist David Autor puts it this way: AI could rebuild the middle class by amplifying what humans do best. Brynjolfsson reminds us, with data to back it up, that for the time being it is the youngest who are paying the price of the transition. Between these two truths there is no contradiction; there is an agenda.

AI is an amplifier. It amplifies the productivity of organisations that prepare for it; it amplifies inequalities in those that do not. It can amplify the value of human judgement; it can also amplify the vulnerability of those who lack access to training. The real question is not technological but political: do we have the collective will to make training as urgent a priority as investment in digital infrastructure, and to find creative, realistic ways to fund the transformation of the system? The answer to this question will determine whether the AI dividend benefits a few, or everyone.

1. Torsten Slok, Rajvi Shah and Shruti Galwankar, Quantifying the Productivity Gains from AI Adoption, Apollo Global Management, January 2026.
2. Early Signals of AI Impact, real-time dashboard (jobsdata.ai), March 2026. Aggregates 460+ sources across 18 indicators. Its assessment: accelerating adoption, rising productivity, shrinking execution-level roles, and no aggregate macroeconomic shift yet measurable.
3. Filippo Bontadini, Carol Corrado, Jonathan Haskel et al., AI as an Innovation in the Method of Innovation, SPRU/Imperial, October 2025.
4. Anjana Susarla, A Gap in AI Adoption? Moravec and the AI Productivity Paradox, Forbes, January 2026.
5. Harvard Business Review, AI Doesn’t Reduce Work — It Intensifies It, February 2026.
6. Acemoglu, Autor, Johnson, op. cit.; MIT Sloan, Pro-Worker AI Doesn’t Just Happen, 2026. On the augmentation/automation distinction as the decisive variable for AI’s social effects.
7. Acemoglu, Autor, Johnson, op. cit. The authors distinguish five categories of technology (labour-augmenting, capital-augmenting, automating, expertise-levelling, and new-task-creating). Only the last is ‘unambiguously pro-worker’.
8. Erik Brynjolfsson, Bharat Chandar, Ruyu Chen, Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of AI, Stanford Digital Economy Lab, August 2025.
9. Ibid. The ADP data cover several tens of millions of American employees, the most granular empirical base available on AI’s impact on young workers’ employment.
10. Financial Times, The Great Graduate Job Drought, 2026.
11. Budget Lab at Yale, Labor Market AI Exposure: What Do We Know?, February 2026; Gad Levanon, AI Exposure and Labor Market Change: What Charts Do and Don’t Show, LinkedIn Pulse, 2026. Note: Natasha Sarin, chair of the Budget Lab, stated in a conversation published by the New York Times (4 February 2026) that the institution’s data show no significant difference between employment levels in the occupations most and least exposed to AI.
12. David Autor, Anton Korinek, Natasha Sarin, interview conducted by David Leonhardt, ‘What Economists Really Think About A.I. and Jobs’, The New York Times, 4 February 2026. Autor notes that the hiring slowdown in exposed occupations began before ChatGPT, and that the Fed’s rate rises are a competing explanation. Sarin notes that the Budget Lab detects no significant difference between exposed and non-exposed occupations. Autor and Neil Thompson sketch the concept of ‘Universal Basic Capital’ there.
13. Exposition à l’IA générative et emploi : application à la classification socio-professionnelle française, 2024; Zety, Les métiers en France face au risque de l’IA (2026–30).
14. David Autor, AI and the Future of Work, Issues.org; NBER working paper w32140, Applying AI to Rebuild Middle Class Jobs.
15. MIT Initiative on the Digital Economy, Effective AI in the Workplace: What the Research Shows, Medium, February 2026; SYPartners, Still Loading: Humans at Work in the AI Age, 2026.
16. Matt Sigelman et al., Beyond the Binary: How Automation and Augmentation Are Combining to Reshape Work, January 2026.
17. Le Figaro Décideurs, IA, recherche d’efficacité : les managers intermédiaires sont-ils une espèce en voie de disparition ?, February 2026.
18. McKinsey Global Institute, Human Skills Will Matter More Than Ever in the Age of AI, 2025; Eric Hazan, Anu Madgavkar, Michael Chui et al., A New Future of Work: The Race to Deploy AI and Raise Skills in Europe and Beyond, MGI, 2024.
19. MIT Sloan, Pro-Worker AI Doesn’t Just Happen: Companies Need to Act, 2026; Daron Acemoglu, David Autor, Simon Johnson, Building Pro-Worker Artificial Intelligence, The Hamilton Project / Brookings, NBER Working Paper 34854, February 2026.
20. Acemoglu, Autor, Johnson, op. cit. The authors identify path dependence, the pro-AGI ideology of the research laboratories, and the concentration of the sector as the factors behind structural under-investment in pro-worker AI.
21. Steven Randazzo (Warwick/Harvard Business School WP 26-036), Cyborgs, Centaurs and Self-Automators: The Three Modes of Human-GenAI Knowledge Work and Their Implications for Skilling, 2025.
22. Anselm Küsters and Benjamin Schneider, What is Technological Unemployment?, Oxford Economic and Social History Working Paper no. 218, March 2025.
23. Which Skills Matter Now? A Data-Driven Framework for K-12 in the Age of AI, V5.1, 2025.
24. Danny Buerkli (Windfall Trust) and Daniel Susskind (Gresham College, Oxford), Securing Humanity’s AI Future — Economic Scenarios for Transformative AI, London, 5 February 2026. Builds on earlier work with Brynjolfsson, Korinek and Agrawal.
25. Collective compensation agreements already exist in this area: in 2023, after a 118-day strike, the American union SAG-AFTRA (Screen Actors Guild – American Federation of Television and Radio Artists, sagaftra.org) obtained contractual commitments from the studios to seek actors’ consent and compensate them before any use of their image or voice to train AI models. This precedent shows that sectoral compensation mechanisms are workable, even if extending them to cognitive professions remains an open question.
26. Acemoglu, Autor, Johnson, op. cit. Nine policy directions are analysed, including antitrust, protection of workers’ expertise, institutionalisation of workers’ voice, and a sectoral focus on health and education.
27. Robert C. Allen (2009), ‘Engels’ Pause: A Pessimist’s Guide to the British Industrial Revolution’, Explorations in Economic History, vol. 46.
28. Karl Polanyi, The Great Transformation (1944): Polanyi describes the process by which the 19th-century market ceased to be a tool in the service of society and became an autonomous force imposing its own rules. The disembedding of the economy creates, in his view, a structural social instability that sooner or later calls for a movement of re-embedding.
29. North, D. C. (1990). Institutions, Institutional Change and Economic Performance. Cambridge University Press.
30. Acemoglu, D., & Johnson, S. (2023). Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. PublicAffairs.
31. Korinek, A., & Stiglitz, J. E. (2021). ‘Steering Technological Progress’, AEA Papers and Proceedings, vol. 111.
32. Acemoglu (Power and Progress, 2023) argues that the direction of technical progress is a political choice. If AI is oriented solely towards automation (task replacement), it compresses employment and concentrates the gains.
33. Several economists have proposed taxing robots or AI systems to finance the social transition, including Robert Shiller (‘Robotization of the Workforce and the Need for a Robot Tax’, 2017) and Bill Gates (Quartz interview, February 2017: ‘The robot that takes your job should pay taxes’). The idea remains controversial: its critics fear it would discourage innovation.
34. Korinek, A., & Stiglitz, J. E. (2021). ‘Steering Technological Progress’, op. cit. The authors stress that, absent intervention, AI-related productivity gains will structurally accrue to the holders of capital rather than to workers.
