
AI: how to protect ourselves from technological Stockholm syndrome

Hamilton Mann
Group Vice President of Digital Marketing and Digital Transformation at Thales and Senior Lecturer at INSEAD
Key takeaways
  • Digital technologies can be a threat to individual autonomy and free will, to the point of making people forget that they have become alienated.
  • When a machine initially perceived as a source of aggression comes to be perceived as a source of comfort, the shift resembles a technological “Stockholm syndrome”.
  • While digital innovation is perceived as inherently positive, it can nevertheless be both emancipatory and alienating, depending on the conditions under which it is adopted.
  • For artificial intelligence to remain centred on our humanity, it must be based on a form of simulated artificial integrity, built with reference to human values.
  • Artificial integrity relies on the ability to prevent and limit functional integrity gaps, which is a prerequisite for ensuring that the benefits of digital technologies are not built at the expense of humans.

The adoption of digital technologies cannot be reduced to a simple rational decision or a functional evolution of practices and uses. It detaches individuals, gradually or otherwise, from their initial frames of reference and habitual structures, immersing them in environments governed by external logics imposed by the technology itself. This shift represents a profound reconfiguration of individuals’ cognitive, social and behavioural structures, under the influence of algorithmic and prescriptive logics that supplant their own frames of reference. This process of technological transition, far from being neutral, is akin to a form of symbolic captivity in which individuals, confronted with the violence of change, activate psychological defence mechanisms in response to what they perceive as an attack on their autonomy, free will and identity integrity.

When adoption is deemed successful, it means that the initial defence structures have given way: the user has not only integrated the rules imposed by the technology but has developed a form of emotional identification with it, reinterpreting the origin of the constraint as a chosen relationship. At this stage, a new normal is established. This shift marks the replacement of the old frame of reference with that of the machine, which is now perceived as familiar and reassuring. The initial aggression is repressed, and the new cognitive automatisms become objects of defence.


This phenomenon, which can be likened to a “Stockholm syndrome” in the relationship between humans and machines, involves a dislocation of cognitive references, followed by an emotional reconfiguration in which the victim comes to protect their technological aggressor. The cognitive enslavement produced in this way is not a side effect but a survival mechanism, fuelled by the brain’s attempts to reduce the stress generated by the intrusion of a foreign thought framework. This emotional rewriting ensures a form of internal coherence in the face of technological alienation. The user’s attention is then diverted from the initial violence and focused on the positive signals emitted by the machine: social validation, algorithmic gratification, playful rewards. These stimuli activate emotional confirmation bias and transform coercion into perceived benevolence.

Increased risk of technological dependency

Through a process of neural plasticity, brain circuits reorganise our perception of our relationship with machines: what was once stressful becomes normal; what was once domination becomes support; and what was once an aggressor becomes a companion. A reversal of the power structure takes place through the reconfiguration of the nucleus accumbens and the prefrontal cortex, anchoring a new coercive emotional relationship. This dynamic is increasingly recognised within the computer science community as a form of “computational agency”, in which software actively reconfigures perception, behaviour and emotional judgement. It represents one of the fundamental dangers that artificial intelligence poses to humanity: the normalisation of mental dependence as a vector of social acceptability. This is why it is not enough to design artificially intelligent systems: it is imperative to equip them with artificial integrity, which guarantees human cognitive sovereignty.

Some argue that digital technology contributes to the empowerment of vulnerable individuals. This argument masks a more disturbing reality: technological dependence is often presented as regained autonomy, when in fact it is based on the prior collapse of mechanisms of identity self-defence. Even when technology aims to restore relative autonomy, the process of cognitive imposition remains active, facilitated by weak defence mechanisms. Users, lacking resistance, adhere all the more quickly and deeply to the framework imposed by the machine. Whatever the case, technology shapes a new cognitive environment. The only difference is the degree of integrity of the pre-existing mental framework: the stronger the framework, the stronger the resistance; the weaker it is, the faster technological infiltration occurs.

The paradox that prevents systemic recognition of this syndrome is that of innovation itself. Perceived as inherently positive, it conceals its ambivalent potential: it can both emancipate and alienate, depending on the conditions of its adoption.

Assessing the artificial integrity of digital systems

For artificial intelligence to enhance our humanity without diluting it, it must go beyond its ability to mimic cognition and be grounded in and guided by artificial integrity, respecting individuals’ mental, emotional and identity freedoms. Technology can alleviate pain, limit risk and improve lives. But no progress should come at the cost of a cognitive debt that would ruin our ability to think for ourselves and, with it, our relationship with our own humanity. Assessing the artificial integrity of digital systems, particularly those incorporating artificial intelligence, must become a central requirement in any digital transformation. This requires the implementation of functional cognitive protection mechanisms designed to prevent the emergence, limit the impact or eliminate functional integrity gaps, with a view to preserving the cognitive, emotional and identity complexity of human beings.

#1 Functional diversion

Using technology for purposes or in roles not intended by the designer or user organisation can render the software’s usage logic and internal governance modes ineffective or inefficient, thereby creating functional and relational confusion [1].

Example: A chatbot designed to answer questions about company HR policy is used as a substitute for a human hierarchy for conflict management or task allocation.
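
A minimal sketch of one possible counter-measure, assuming a hypothetical `handle_request` wrapper around such a chatbot: the assistant checks whether a request falls within its intended HR-policy scope and declines diverted uses, redirecting them to a human. The topic lists and reply text below are illustrative assumptions, not part of any real product.

```python
# Minimal sketch of a scope guard for a single-purpose HR-policy chatbot.
# ALLOWED_TOPICS, OUT_OF_SCOPE_MARKERS and handle_request are illustrative
# assumptions, not the design of any actual system.

ALLOWED_TOPICS = {"leave", "benefits", "remote work", "payroll", "training"}
OUT_OF_SCOPE_MARKERS = {"assign", "task", "conflict", "dispute", "performance review"}

OUT_OF_SCOPE_REPLY = (
    "I can only answer questions about company HR policy. "
    "For conflict management or task allocation, please contact your manager."
)

def handle_request(message: str, answer_policy_question) -> str:
    """Answer HR-policy questions; decline and redirect everything else."""
    text = message.lower()
    if any(marker in text for marker in OUT_OF_SCOPE_MARKERS):
        return OUT_OF_SCOPE_REPLY              # refuse the diverted role
    if any(topic in text for topic in ALLOWED_TOPICS):
        return answer_policy_question(message)  # intended use
    return OUT_OF_SCOPE_REPLY                  # default to refusal when unsure

if __name__ == "__main__":
    fake_answer = lambda q: "Our parental leave policy grants 16 weeks."
    print(handle_request("How many weeks of parental leave do I get?", fake_answer))
    print(handle_request("Please assign this task to Paul and settle our conflict.", fake_answer))
```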

#2 Functional void

The absence of necessary steps or functions, because they have not been developed and are therefore not present in the technology’s operating logic, creates a “functional void” in the user’s intended usage [2].

Example: Content generation technology (such as generative AI) that does not allow content to be exported directly in a usable format (Word, PDF, CMS) and at the expected quality, thereby limiting or blocking its operational use.

#3 Functional security

The absence of safeguards, human validation steps or information messages when the system performs an action with irreversible effects that may not correspond to the user’s intention [3].

Example: A marketing tool automatically sends emails to a list of contacts with no mechanism to block the send, request user verification or alert the user when the criterion that determines the safety and quality of the send, the correct mailing list, has not been confirmed.
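
As a sketch of the kind of safeguard this gap calls for, the following hypothetical `send_campaign` guard refuses to perform the irreversible send until a human has explicitly confirmed the mailing list. The `Campaign` structure and its field names are assumptions for illustration, not the API of any real marketing tool.

```python
# Minimal sketch of a human-validation safeguard before an irreversible send.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Campaign:
    name: str
    mailing_list_id: str
    approved_list_id: Optional[str] = None   # set only after human review

def send_campaign(campaign: Campaign, send_fn) -> bool:
    """Block the send unless the mailing list was explicitly confirmed."""
    if campaign.approved_list_id != campaign.mailing_list_id:
        # The irreversible action is not performed; the user is alerted instead.
        print(f"[BLOCKED] '{campaign.name}': list '{campaign.mailing_list_id}' "
              "has not been confirmed by a human.")
        return False
    send_fn(campaign)
    return True

if __name__ == "__main__":
    c = Campaign(name="Spring promo", mailing_list_id="list-eu-opt-in")
    send_campaign(c, lambda c: print("sending..."))   # blocked, no confirmation yet
    c.approved_list_id = "list-eu-opt-in"             # human confirms the list
    send_campaign(c, lambda c: print(f"Sending '{c.name}' to {c.mailing_list_id}"))
```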

#4 Functional alienation

The creation of automatic behaviours or conditioned reflexes, similar to Pavlovian reflexes, can reduce or destroy the user’s ability to think and judge, leading to an erosion of their decision-making sovereignty [4].

Example: Systematic acceptance of cookies or blind validation of system alerts by cognitively fatigued users.

#5 Functional ideology

Emotional dependence on technology can alter or neutralise critical thinking and foster the mental construction of an ideology that fuels discourse which relativises, rationalises or collectively denies how the technology actually functions or malfunctions [5].

Example: Justification of failures or errors specific to the functioning of the technology with arguments such as “It’s not the tool’s fault” or “The tool can’t guess what the user has forgotten”.

#6 Functional cultural consistency 

The contradiction between the logical framework imposed or influenced by the technology and the values or behavioural principles promoted by the organisational culture can create tension [6].

Example: A technological workflow that leads to the creation of teams to validate and control the work done by others, within an organisation that promotes and values team empowerment.

#7 Functional transparency

If the decision-making mechanisms or algorithmic logic behind how the technology works are not transparent or accessible to the user, the user may be unable to anticipate, override or correct decisions that do not match their intention [7].

Example: Preselection of candidates by a technology that resolves conflicts and arbitrates between user-defined selection criteria (experience, qualifications, soft skills) without the weighting or exclusion rules being explicitly visible, modifiable and verifiable by the user.
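
One way to close this gap, sketched below under assumed criterion names and weights (not those of any actual screening product), is to make the weighting rules an explicit, user-editable data structure and to return a per-criterion breakdown alongside the final score, so the arbitration is visible and verifiable.

```python
# Minimal sketch of a candidate-scoring step with explicit, inspectable weights.
DEFAULT_WEIGHTS = {"experience": 0.5, "qualifications": 0.3, "soft_skills": 0.2}

def score_candidate(criteria_scores: dict, weights: dict = DEFAULT_WEIGHTS) -> dict:
    """Return the final score together with per-criterion contributions."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("Weights must sum to 1 so the trade-off stays explicit.")
    contributions = {
        name: weights[name] * criteria_scores.get(name, 0.0) for name in weights
    }
    return {
        "total": sum(contributions.values()),
        "breakdown": contributions,   # the reasons behind the decision
        "weights": weights,           # the rules, modifiable by the user
    }

if __name__ == "__main__":
    result = score_candidate({"experience": 0.9, "qualifications": 0.6, "soft_skills": 0.8})
    print(result["total"])       # the decision
    print(result["breakdown"])   # the visible, verifiable rationale
```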

#8 Functional addiction

The presence of features based on gamification, immediate gratification or micro-reward systems calibrated to hack the user’s motivation circuits can activate neurological reward mechanisms to stimulate repetitive, compulsive and addictive behaviours, leading to emotional decompensation and self-reinforcing cycles [8].

Example: Notifications, likes, infinite scroll algorithms, visual or audio bonuses, milestones reached through point mechanics, badges, levels or scores to maintain exponential and lasting engagement.
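
As a hedged illustration of a design that resists this pattern, the sketch below rations gratification-style notifications with a fixed daily budget instead of optimising for maximum engagement. The budget value, the in-memory store and the `may_notify` helper are assumptions for the example, not a prescription.

```python
# Minimal sketch of a notification budget that caps reward-style prompts per day.
import time
from collections import defaultdict
from typing import Optional

DAILY_NOTIFICATION_BUDGET = 3          # deliberate scarcity, not "as many as possible"
SECONDS_PER_DAY = 24 * 60 * 60

_sent: dict = defaultdict(list)        # user_id -> timestamps of recent notifications

def may_notify(user_id: str, now: Optional[float] = None) -> bool:
    """Allow a gratification-style notification only within the daily budget."""
    now = time.time() if now is None else now
    # Keep only notifications sent within the last 24 hours.
    _sent[user_id] = [t for t in _sent[user_id] if now - t < SECONDS_PER_DAY]
    if len(_sent[user_id]) >= DAILY_NOTIFICATION_BUDGET:
        return False
    _sent[user_id].append(now)
    return True

if __name__ == "__main__":
    decisions = [may_notify("user-42") for _ in range(5)]
    print(decisions)   # [True, True, True, False, False]
```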

#9 Functional ownership

The appropriation, reuse or processing of personal or intellectual data by a technology, regardless of its public accessibility, without the informed, explicit and meaningful consent of its owner or creator, raises ethical and legal questions [9].

Example: An AI model trained on images, text or voices of individuals found online, thereby monetising someone’s identity, knowledge or work without prior authorisation and without any explicit acceptance mechanism, licence or transparent attribution.
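
The sketch below shows one possible consent gate applied before data enters a training corpus, assuming hypothetical `Sample` records that carry explicit consent and licence metadata. The field names and the accepted-licence list are illustrative; real provenance signals would have to come from the data pipeline itself.

```python
# Minimal sketch of a consent gate applied before data enters a training set.
from dataclasses import dataclass

@dataclass
class Sample:
    source_url: str
    creator: str
    consent: bool              # explicit, informed opt-in recorded upstream
    licence: str               # e.g. "CC0", "CC-BY-4.0", "proprietary", "unknown"

ACCEPTED_LICENCES = {"CC0", "CC-BY-4.0"}

def admissible_for_training(sample: Sample) -> bool:
    """Admit a sample only with explicit consent and a compatible licence."""
    return sample.consent and sample.licence in ACCEPTED_LICENCES

def build_training_set(samples: list) -> list:
    kept = [s for s in samples if admissible_for_training(s)]
    rejected = len(samples) - len(kept)
    print(f"Kept {len(kept)} samples, rejected {rejected} lacking consent or licence.")
    return kept

if __name__ == "__main__":
    data = [
        Sample("https://example.org/a", "Alice", consent=True, licence="CC-BY-4.0"),
        Sample("https://example.org/b", "Bob", consent=False, licence="unknown"),
    ]
    build_training_set(data)
```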

#10 Functional bias

The inability of a technology to detect, mitigate or prevent bias or discriminatory patterns, whether in its design, training data, decision-making logic or deployment context, can result in unfair treatment, exclusion or systemic distortion towards individuals or groups [10].

Example: A facial recognition system that performs significantly less reliably for people with dark skin due to unbalanced training data, without functional safeguards against bias or accountability mechanisms.
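
A minimal sketch of the kind of functional safeguard this example lacks: an evaluation-time audit that compares accuracy across demographic groups and blocks deployment when the disparity exceeds a chosen threshold. The group labels, the 0.8 ratio threshold and the synthetic data are assumptions for illustration, not a validated fairness methodology.

```python
# Minimal sketch of a per-group performance audit for a recognition system.
from collections import defaultdict

DISPARITY_THRESHOLD = 0.8   # worst-group accuracy must reach 80% of the best group's

def group_accuracies(records: list) -> dict:
    """records: dicts of the form {'group': str, 'correct': bool}."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += int(r["correct"])
    return {g: hits[g] / totals[g] for g in totals}

def audit(records: list) -> bool:
    acc = group_accuracies(records)
    worst, best = min(acc.values()), max(acc.values())
    ratio = worst / best if best else 0.0
    print(f"Per-group accuracy: {acc} (ratio {ratio:.2f})")
    return ratio >= DISPARITY_THRESHOLD   # False => block deployment, fix the data

if __name__ == "__main__":
    evaluation = (
        [{"group": "light-skin", "correct": i % 50 != 0} for i in range(1, 201)]
        + [{"group": "dark-skin", "correct": i % 4 != 0} for i in range(1, 201)]
    )
    print("Deployable:", audit(evaluation))
```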


Given their interdependence with human systems, the ten functional integrity gaps described above must be examined through a systemic approach, encompassing the nano (biological, neurological), micro (individual, behavioural), macro (organisational, institutional) and meta (cultural, ideological) levels [11].

The cost associated with the absence of artificial integrity in systems, whether or not they incorporate artificial intelligence, affects various types of capital: human, cultural, decision-making, reputational, technological and financial. This cost manifests itself in the destruction of sustainable value, fuelled by unsustainable risks and an uncontrolled rise in the capital invested to generate returns, degrading return on invested capital (ROIC) and turning these technological investments into structural handicaps for the company’s profitability and, consequently, for its long-term viability. Companies are not adopting responsible digital transformation solely to meet societal expectations, but because their sustainable performance depends on it and because it helps to strengthen the living fabric of the society that nourishes them and on which they depend for growth.

1. Ash, J., Kitchin, R., & Leszczynski, A. (2018). Digital turn, digital geographies? Progress in Human Geography, 42(1), 25–43. https://doi.org/10.1177/0309132516664800
2. Verbeek, P.-P. (2005). What things do: Philosophical reflections on technology, agency, and design. Penn State Press.
3. Perrow, C. (1999). Normal accidents: Living with high-risk technologies. Princeton University Press.
4. Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A. L. (2018). The dark (patterns) side of UX design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18) (Paper No. 534, pp. 1–14). Association for Computing Machinery. https://doi.org/10.1145/3173574.3174108
5. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
6. Ge, X., Xu, C., Misaki, D., Markus, H. R., & Tsai, J. L. (2024). How culture shapes what people want from AI. Stanford SPARQ. https://sparq.stanford.edu/sites/g/files/sbiybj19021/files/media/file/culture-ai.pdf
7. Gutiérrez, J. D. (2025, April 9). Why does algorithmic transparency matter and what can we do about it? Open Global Rights. https://www.openglobalrights.org/why-does-algorithmic-transparency-matter-and-what-can-we-do-about-it/
8. Yin, C., Wa, A., Zhang, Y., Huang, R., & Zheng, J. (2025). Exploring the dark patterns in user experience design for short-form videos. In N. A. Streitz & S. Konomi (Eds.), Distributed, ambient and pervasive interactions (Lecture Notes in Computer Science, Vol. 15802, pp. 330–347). Springer. https://doi.org/10.1007/978-3-031-92977-9_21
9. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
10. National Institute of Standards and Technology. (2021). NIST Special Publication 1270: Towards a standard for identifying and managing bias in artificial intelligence. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf
11. Mann, H. (2024). Artificial integrity: The paths to leading AI toward a human-centered future. Wiley.
