A smartphone bound by chains represents addiction. Generated by AI

AI: how to protect ourselves from technological Stockholm syndrome

Hamilton Mann
Group Vice President of Digital Marketing and Digital Transformation at Thales and Senior Lecturer at INSEAD
Key takeaways
  • Digital technologies can be a threat to individual autonomy and free will, to the point of making people forget that they have become alienated.
  • When machines initially perceived as a source of aggression come to be perceived as a source of comfort, the shift resembles a technological “Stockholm syndrome”.
  • While digital innovation is perceived as inherently positive, it can nevertheless be both emancipatory and alienating, depending on the conditions under which it is adopted.
  • To remain focused on our humanity, artificial intelligence must be based on a form of simulated artificial integrity, built with reference to human values.
  • Artificial integrity relies on the ability to prevent and limit functional integrity gaps, which is a prerequisite for ensuring that the benefits of digital technologies are not built at the expense of humans.

The adoption of digital technologies cannot be reduced to a simple rational decision or a functional evolution of practices and uses. It detaches individuals, gradually or otherwise, from their initial frames of reference and habitual structures, immersing them in environments governed by external logics imposed by the technology itself. This shift represents a profound reconfiguration of individuals’ cognitive, social and behavioural structures, under the influence of algorithmic and prescriptive logics that supplant their own frames of reference. This process of technological transition, far from being neutral, is akin to a form of symbolic captivity in which individuals, confronted with the violence of change, activate psychological defence mechanisms in response to what they perceive as an attack on their autonomy, free will and identity integrity.

When adoption is deemed successful, it means that the initial defence structures have given way: the user has not only integrated the rules imposed by technology but has developed a form of emotional identification with it, reinterpreting the origin of the constraint as a chosen relationship. At this stage, a new normal is established. This shift marks the replacement of the old frame of reference with that of the machine, which is now perceived as familiar and reassuring. The initial aggression is repressed, and the new cognitive automatisms become objects of defence.


This phenomenon, which can be likened to “Stockholm syndrome” in the relationship between humans and machines, involves a dislocation of cognitive references, followed by an emotional reconfiguration in which the victim comes to protect their technological aggressor. The cognitive enslavement produced in this way is not a side effect; it is a survival mechanism, fuelled by the brain’s attempts to reduce the stress generated by the intrusion of a foreign thought framework. This emotional rewriting ensures a form of internal coherence in the face of technological alienation. The user’s attention is then diverted from the initial violence and focused on the positive signals emitted by the machine: social validation, algorithmic gratification, playful rewards. These stimuli activate emotional confirmation bias and transform coercion into perceived benevolence.

Increased risk of technological dependency

Through a process of neural plasticity, brain circuits reorganise our perception of our relationship with machines: what was once stressful becomes normal; what was once domination becomes support; and what was once an aggressor becomes a companion. A reversal of the power structure is taking place through the reconfiguration of the nucleus accumbens and the prefrontal cortex, anchoring a new coercive emotional relationship, a phenomenon increasingly recognised within the computer science community as a form of “computational agency”, in which software actively reconfigures perception, behaviour and emotional judgement. This phenomenon represents one of the fundamental dangers that artificial intelligence poses to humanity: the normalisation of mental dependence as a vector of social acceptability. This is why it is not enough to design artificially intelligent systems: it is imperative to equip them with artificial integrity, which guarantees human cognitive sovereignty.

Some argue that digital technology contributes to the empowerment of vulnerable individuals. This argument masks a more disturbing reality: technological dependence is often presented as regained autonomy, when in fact it is based on the prior collapse of mechanisms of identity self-defence. Even when technology aims to restore relative autonomy, the process of cognitive imposition remains active, facilitated by weak defence mechanisms. Users, lacking resistance, adhere all the more quickly and deeply to the framework imposed by the machine. Whatever the case, technology shapes a new cognitive environment. The only difference is the degree of integrity of the pre-existing mental framework: the stronger the framework, the stronger the resistance; the weaker it is, the faster technological infiltration occurs. The paradox that prevents systemic recognition of this syndrome is that of innovation itself. Perceived as inherently positive, it conceals its ambivalent potential: it can both emancipate and alienate, depending on the conditions of its adoption.

Assessing the artificial integrity of digital systems

For artificial intelligence to enhance our humanity without diluting it, it must go beyond its ability to mimic cognition and be grounded in and guided by artificial integrity, so as to respect individuals’ mental, emotional and identity freedoms. Technology can alleviate pain, limit risk and improve lives. But no progress should come at the cost of a cognitive debt that would ruin our ability to think for ourselves and, with it, our relationship with our own humanity. Assessing the artificial integrity of digital systems, particularly those incorporating artificial intelligence, must become a central requirement in any digital transformation. This requires the implementation of functional cognitive protection mechanisms designed to prevent the emergence of, limit the impact of, or eliminate functional integrity gaps, with a view to preserving the cognitive, emotional and identity complexity of human beings.

#1 Functional diversion

Using technology for purposes or in roles not intended by the designer or user organisation can render the software’s usage logic and internal governance modes ineffective or inefficient, thereby creating functional and relational confusion1.

Example: A chatbot designed to answer questions about company HR policy is used as a substitute for the human hierarchy in conflict management or task allocation.

#2 Functional void

The absence of necessary steps or functions, because they have not been developed and are therefore not present in the technology’s operating logic, creates a “functional void” from the user’s point of view2.

Example: Content generation technology (such as generative AI) that does not allow content to be exported directly in a usable format (Word, PDF, CMS) at the expected quality, thereby limiting or blocking its operational use.

#3 Functional security

The absence of safeguards, human validation steps or information messages when the system performs an action with irreversible effects that may not correspond to the user’s intention3.

Example: A marketing technology automatically sends emails to a list of contacts without any mechanism to block the sending, request user verification or alert the user when the criterion that determines the safety and quality of the send, namely the correct mailing list, has not been confirmed.
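
To make the idea of a functional safeguard more concrete, here is a minimal sketch in Python; the function and field names are hypothetical and not drawn from any real marketing tool. The irreversible bulk send is blocked unless the operator explicitly re-confirms the mailing list.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Campaign:
    subject: str
    list_id: str  # the mailing list the campaign is configured to target

def send_campaign(campaign: Campaign, confirmed_list_id: Optional[str]) -> bool:
    """Guard an irreversible action: refuse to send unless a human has
    explicitly confirmed which mailing list will receive the emails."""
    if confirmed_list_id is None:
        print("Blocked: no human confirmation of the recipient list.")
        return False
    if confirmed_list_id != campaign.list_id:
        print("Blocked: the confirmed list does not match the campaign's list.")
        return False
    print(f"Sending '{campaign.subject}' to list '{campaign.list_id}'.")
    return True

campaign = Campaign(subject="Spring offer", list_id="customers-2025")
send_campaign(campaign, confirmed_list_id=None)              # blocked
send_campaign(campaign, confirmed_list_id="customers-2025")  # proceeds
```

The safeguard does not replace the user’s judgement; it simply forces the intention to be stated before the irreversible effect can occur.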

#4 Functional alienation

The creation of automatic behaviours or conditioned reflexes similar to Pavlovian reflexes can reduce or destroy the user’s ability to think and judge, leading to an erosion of their decision-making sovereignty4.

Example: Systematic acceptance of cookies or blind validation of system alerts by cognitively fatigued users.

#5 Functional ideology

Emotional dependence on technology can lead to the alteration or neutralisation of critical thinking, as well as the mental construction of an ideology that fuels the emergence of discourse that relativises, rationalises or collectively denies the way the technology functions or malfunctions5.

Example: Justification of failures or errors specific to the functioning of the technology with arguments such as “It’s not the tool’s fault” or “The tool can’t guess what the user has forgotten”.

#6 Functional cultural consistency 

The antinomy and contradictory injunction between the logical framework imposed or influenced by technology and the values or behavioural principles promoted by the organisational culture can create tensions6.

Example: A technological workflow that leads to the creation of teams to validate and control the work done by others, in an organisation that promotes and values team empowerment.

#7 Functional transparency

If the decision-making mechanisms or algorithmic logic behind how the technology works are not transparent or accessible to the user, this may prevent the user from anticipating, challenging or overriding decisions that do not correspond to their intention7.

Example: Preselection of candidates by a technology that manages conflicts and arbitrates between user-defined selection criteria (experience, qualifications, soft skills) without the weighting or exclusion rules being explicitly visible, modifiable and verifiable by the user.
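
As an illustration of what “visible, modifiable and verifiable” can mean in practice, here is a minimal sketch in Python; the criteria and weights are hypothetical. The weighting applied to each candidate is ordinary data that the user can inspect and change, rather than logic hidden inside the system.

```python
# Hypothetical criteria and weights: the point is that the weighting rules are
# plain data the user can read, audit and modify, not hidden internal logic.
DEFAULT_WEIGHTS = {"experience": 0.5, "qualifications": 0.3, "soft_skills": 0.2}

def score_candidate(criteria_scores: dict, weights: dict = DEFAULT_WEIGHTS) -> float:
    """Weighted sum over user-defined criteria (each score between 0 and 1)."""
    return sum(weights[name] * criteria_scores.get(name, 0.0) for name in weights)

candidate = {"experience": 0.8, "qualifications": 0.6, "soft_skills": 0.9}
print(score_candidate(candidate))  # 0.76 with the default, inspectable weights
print(DEFAULT_WEIGHTS)             # the weighting rules themselves remain visible
```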

#8 Functional addiction

The presence of features based on gamification, immediate gratification or micro-reward systems calibrated to hack the user’s motivation circuits can activate neurological reward mechanisms to stimulate repetitive, compulsive and addictive behaviours, leading to emotional decompensation and self-reinforcing cycles8.

Example: Notifications, likes, infinite-scroll algorithms, visual or audio bonuses, milestones reached through point mechanics, badges, levels or scores to maintain exponential and lasting engagement.
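
By way of contrast, a functional cognitive protection mechanism can make the opposite design choice. The minimal sketch below, with entirely hypothetical parameters, batches social signals into a capped daily digest instead of pushing each one immediately as a micro-reward.

```python
import datetime

MAX_ITEMS_PER_DIGEST = 5  # hypothetical cap, chosen to limit reward-loop triggers

def build_daily_digest(events, day):
    """Group the day's social signals into one capped digest instead of
    delivering each of them as an immediate, individual notification."""
    todays = [e for e in events if e["date"] == day]
    kept = todays[:MAX_ITEMS_PER_DIGEST]
    return {"date": day.isoformat(),
            "items": [e["text"] for e in kept],
            "not_shown": len(todays) - len(kept)}

day = datetime.date(2025, 1, 6)
events = [{"date": day, "text": f"Your post received like #{i + 1}"} for i in range(8)]
print(build_daily_digest(events, day))  # 5 items shown, 3 held back for later
```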

#9 Functional ownership

The appropriation, reuse or processing of personal or intellectual data by a technology, regardless of its public accessibility, without the informed, explicit and meaningful consent of its owner or creator, raises ethical and legal questions9.

Example: An AI model trained on images, text or voices of individuals found online, thereby monetising someone’s identity, knowledge or work without prior authorisation and without any explicit acceptance mechanism, licence or transparent attribution.
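
A minimal sketch of the missing mechanism, in Python with hypothetical record fields: a consent gate that admits an item into a training corpus only when its owner has given explicit consent, rather than treating public accessibility as permission.

```python
def filter_training_data(records):
    """Admit an item into the training corpus only if its owner has given
    explicit consent; public accessibility alone is not treated as permission."""
    admitted, excluded = [], []
    for record in records:
        target = admitted if record.get("consent") is True else excluded
        target.append(record["id"])
    return admitted, excluded

records = [
    {"id": "img-001", "owner": "alice", "consent": True},   # explicit opt-in
    {"id": "img-002", "owner": "bob", "consent": False},    # consent refused
    {"id": "txt-003", "owner": "carol"},                    # consent unknown
]
admitted, excluded = filter_training_data(records)
print("admitted:", admitted)   # ['img-001']
print("excluded:", excluded)   # ['img-002', 'txt-003']
```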

#10 Functional bias

The inability of a technology to detect, mitigate or prevent bias or discriminatory patterns, whether in its design, training data, decision-making logic or deployment context, can result in unfair treatment, exclusion or systemic distortion towards individuals or groups10.

Example: A facial recognition system that performs significantly less reliably for people with dark skin due to unbalanced training data, without functional safeguards against bias or accountability mechanisms.
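
One concrete form such a safeguard could take is a routine per-group error audit. In the sketch below, the group names, evaluation data and tolerance threshold are all made up for illustration; it compares false-negative rates across groups and flags the system when the gap exceeds the tolerance.

```python
def false_negative_rate(results):
    """Share of genuine matches the system failed to recognise."""
    misses = sum(1 for r in results if r["actual"] and not r["predicted"])
    positives = sum(1 for r in results if r["actual"])
    return misses / positives if positives else 0.0

def audit_by_group(results_by_group, max_gap=0.05):
    """Flag the system when the error-rate gap between groups exceeds max_gap."""
    rates = {group: false_negative_rate(r) for group, r in results_by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap

# Made-up evaluation results: 100 genuine matches per group.
results_by_group = {
    "lighter_skin": [{"actual": True, "predicted": True}] * 95
                    + [{"actual": True, "predicted": False}] * 5,
    "darker_skin":  [{"actual": True, "predicted": True}] * 70
                    + [{"actual": True, "predicted": False}] * 30,
}
rates, gap, acceptable = audit_by_group(results_by_group)
print(rates)        # {'lighter_skin': 0.05, 'darker_skin': 0.3}
print(gap)          # 0.25
print(acceptable)   # False – the gap exceeds the tolerance and must be addressed
```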

The cost of lacking artificial integrity impacts many types of capital, particularly human capital.

Given their interdependence with human systems, the ten functional integrity gaps in artificial integrity must be examined through a systemic approach, encompassing the nano (biological, neurological), micro (individual, behavioural), macro (organisational, institutional) and meta (cultural, ideological) levels11.

The cost associated with the absence of artificial integrity in systems, whether or not they incorporate artificial intelligence, impacts various types of capital: human, cultural, decision-making, reputational, technological and financial. This cost manifests itself in the destruction of sustainable value, fuelled by unsustainable risks and an uncontrolled increase in the cost of the capital invested to generate returns (ROIC), transforming these technological investments into structural handicaps for the company’s profitability and, consequently, for its long-term viability. Companies are not adopting responsible digital transformation solely to meet societal expectations, but because their sustainable performance depends on it and because it helps to strengthen the living fabric of the society that nourishes them and on which they depend for growth.

1. Ash, J., Kitchin, R., & Leszczynski, A. (2018). Digital turn, digital geographies? Progress in Human Geography, 42(1), 25–43. https://doi.org/10.1177/0309132516664800
2. Verbeek, P.-P. (2005). What things do: Philosophical reflections on technology, agency, and design. Penn State Press.
3. Perrow, C. (1999). Normal accidents: Living with high-risk technologies. Princeton University Press.
4. Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A. L. (2018). The dark (patterns) side of UX design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18) (Paper No. 534, pp. 1–14). Association for Computing Machinery. https://doi.org/10.1145/3173574.3174108
5. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
6. Ge, X., Xu, C., Misaki, D., Markus, H. R., & Tsai, J. L. (2024). How culture shapes what people want from AI. Stanford SPARQ. https://sparq.stanford.edu/sites/g/files/sbiybj19021/files/media/file/culture-ai.pdf
7. Gutiérrez, J. D. (2025, April 9). Why does algorithmic transparency matter and what can we do about it? Open Global Rights. https://www.openglobalrights.org/why-does-algorithmic-transparency-matter-and-what-can-we-do-about-it/
8. Yin, C., Wa, A., Zhang, Y., Huang, R., & Zheng, J. (2025). Exploring the dark patterns in user experience design for short-form videos. In N. A. Streitz & S. Konomi (Eds.), Distributed, ambient and pervasive interactions (Lecture Notes in Computer Science, Vol. 15802, pp. 330–347). Springer. https://doi.org/10.1007/978-3-031-92977-9_21
9. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
10. National Institute of Standards and Technology. (2021). NIST Special Publication 1270: Towards a standard for identifying and managing bias in artificial intelligence. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf
11. Mann, H. (2024). Artificial integrity: The paths to leading AI toward a human-centered future. Wiley.
