A smartphone bound by chains represents addiction. Image generated by AI.

AI: how to protect ourselves from technological Stockholm syndrome

Hamilton Mann
Group Vice President of Digital Marketing and Digital Transformation at Thales and Senior Lecturer at INSEAD
Key takeaways
  • Digital technologies can be a threat to individual autonomy and free will, to the point of making people forget that they have become alienated.
  • Replacing machines perceived as a source of aggression with machines perceived as a source of comfort is similar to a technological “Stockholm syndrome”.
  • While digital innovation is perceived as inherently positive, it can nevertheless be both emancipatory and alienating, depending on the conditions under which it is adopted.
  • To remain focused on our humanity, artificial intelligence must be based on a form of simulated artificial integrity, built with reference to human values.
  • Artificial integrity relies on the ability to prevent and limit functional integrity gaps, which is a prerequisite for ensuring that the benefits of digital technologies are not built at the expense of humans.

The adoption of digital technologies cannot be reduced to a simple rational decision or a functional evolution of practices and uses. It detaches individuals, gradually or otherwise, from their initial frames of reference and habitual structures, immersing them in environments governed by external logics imposed by the technology itself. This shift represents a profound reconfiguration of individuals’ cognitive, social and behavioural structures, under the influence of algorithmic and prescriptive logics that supplant their own frames of reference. This process of technological transition, far from being neutral, is akin to a form of symbolic captivity in which individuals, confronted with the violence of change, activate psychological defence mechanisms in response to what they perceive as an attack on their autonomy, free will and identity integrity.

When adoption is deemed successful, it means that the initial defence structures have given way: the user has not only integrated the rules imposed by technology but has developed a form of emotional identification with it, reinterpreting the origin of the constraint as a chosen relationship. At this stage, a new normal is established. This shift marks the replacement of the old frame of reference with that of the machine, which is now perceived as familiar and reassuring. The initial aggression is repressed, and the new cognitive automatisms become objects of defence.


This phenomenon, which can be likened to “Stockholm syndrome” in the relationship between humans and machines, involves a dislocation of cognitive references, followed by an emotional reconfiguration in which the victim comes to protect their technological aggressor. The cognitive enslavement produced in this way is not a side effect; it is a survival mechanism, fuelled by the brain’s attempts to reduce the stress generated by the intrusion of a foreign thought framework. This emotional rewriting ensures a form of internal coherence in the face of technological alienation. The user’s attention is then diverted from the initial violence and focused on the positive signals emitted by the machine: social validation, algorithmic gratification, playful rewards. These stimuli activate emotional confirmation bias and transform coercion into perceived benevolence.

Increased risk of technological dependency

Through a process of neural plasticity, brain circuits reorganise our perception of our relationship with machines: what was once stressful becomes normal; what was once domination becomes support; and what was once an aggressor becomes a companion. A reversal of the power structure takes place through the reconfiguration of the nucleus accumbens and the prefrontal cortex, anchoring a new coercive emotional relationship. This is increasingly recognised within the computer science community as a form of “computational agency”, in which software actively reconfigures perception, behaviour and emotional judgement. This phenomenon represents one of the fundamental dangers that artificial intelligence poses to humanity: the normalisation of mental dependence as a vector of social acceptability. This is why it is not enough to design artificially intelligent systems: it is imperative to equip them with artificial integrity, which guarantees human cognitive sovereignty.

Some argue that digital technology contributes to the empowerment of vulnerable individuals. This argument masks a more disturbing reality: technological dependence is often presented as regained autonomy, when in fact it is based on the prior collapse of mechanisms of identity self-defence. Even when technology aims to restore relative autonomy, the process of cognitive imposition remains active, facilitated by weak defence mechanisms. Users, lacking resistance, adhere all the more quickly and deeply to the framework imposed by the machine. Whatever the case, technology shapes a new cognitive environment. The only difference is the degree of integrity of the pre-existing mental framework: the stronger the framework, the stronger the resistance; the weaker it is, the faster technological infiltration occurs. The paradox that prevents systemic recognition of this syndrome is that of innovation itself. Perceived as inherently positive, it conceals its ambivalent potential: it can both emancipate and alienate, depending on the conditions of its adoption.

Assessing the artificial integrity of digital systems

For artificial intelligence to enhance our humanity without diluting it, it must go beyond its ability to mimic cognition and be grounded in and guided by artificial integrity, so as to respect individuals’ mental, emotional and identity freedoms. Technology can alleviate pain, limit risk and improve lives. But no progress should come at the cost of a cognitive debt that would ruin our ability to think for ourselves and, with it, our relationship with our own humanity. Assessing the artificial integrity of digital systems, particularly those incorporating artificial intelligence, must become a central requirement in any digital transformation. This requires the implementation of functional cognitive protection mechanisms designed to prevent the emergence of, limit the impact of, or eliminate functional integrity gaps, with a view to preserving the cognitive, emotional and identity complexity of human beings.

#1 Functional diversion

Using technology for purposes or in roles not intended by the designer or user organisation can render the software’s usage logic and internal governance modes ineffective or inefficient, thereby creating functional and relational confusion¹.

Example: A chatbot designed to answer questions about company HR policy is used as a substitute for a human hierarchy for conflict management or task allocation.
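As a deliberately simplified illustration of how such a diversion can be constrained by design, the sketch below shows a chatbot entry point that only answers questions within its documented HR-policy scope and redirects managerial requests back to humans. The keyword lists, function names and messages are assumptions made for illustration, not elements described in the article.

```python
# Minimal sketch of a scope guard for an HR-policy chatbot.
# Topic keywords, labels and messages are illustrative assumptions.

ALLOWED_TOPICS = {
    "leave": ["vacation", "holiday", "leave", "absence"],
    "benefits": ["insurance", "pension", "benefits"],
    "policy": ["policy", "code of conduct", "remote work"],
}

OUT_OF_SCOPE = ["conflict", "dispute", "assign", "task allocation", "performance review"]


def route_request(question: str) -> str:
    """Answer only in-scope HR-policy questions; redirect everything else to a human."""
    q = question.lower()
    if any(term in q for term in OUT_OF_SCOPE):
        return ("This request involves management decisions. "
                "Please raise it with your manager or HR business partner.")
    if any(term in q for terms in ALLOWED_TOPICS.values() for term in terms):
        return answer_from_policy_corpus(question)  # hypothetical retrieval step
    return "I can only answer questions about documented HR policy."


def answer_from_policy_corpus(question: str) -> str:
    # Placeholder for the actual retrieval/generation step.
    return f"[policy answer for: {question}]"
```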

#2 Functional void

The absence of necessary steps or functions, because they have not been developed and are therefore not present in the technology’s operating logic, creates a “functional void” from the user’s point of view².

Example: Content generation technology (such as generative AI) that does not allow content to be exported directly in a usable format (Word, PDF, CMS) at the expected quality, thereby limiting or blocking its operational use.

#3 Functional security

The absence of safeguards, human validation steps or information messages when the system performs an action with irreversible effects exposes the user to outcomes that may not correspond to their intention³.

Example: A marketing technology automatically sends emails to a list of contacts without any mechanism to block the sending, request user verification or alert the user when a criterion that determines the safety and quality of the send, namely the correct mailing list, has not been confirmed.
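A minimal sketch of the kind of safeguard this example lacks, assuming a simple campaign object with an approved and a selected mailing list (the field names, the confirm() prompt and the send_email() call are hypothetical): the irreversible send is blocked when the safety criterion is not met, and otherwise still requires explicit human confirmation.

```python
# Minimal sketch of a confirmation gate before an irreversible bulk send.
# Campaign fields, the confirm() prompt and send_email() are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Campaign:
    subject: str
    approved_list_id: str   # the mailing list the user signed off on
    selected_list_id: str   # the mailing list actually attached to the send


def confirm(prompt: str) -> bool:
    """Explicit human validation step; nothing is sent without a typed 'yes'."""
    return input(f"{prompt} [yes/no]: ").strip().lower() == "yes"


def send_campaign(campaign: Campaign, recipients: list[str]) -> None:
    # Safeguard 1: block the send if the safety-critical criterion is not met.
    if campaign.selected_list_id != campaign.approved_list_id:
        raise ValueError(
            f"Selected list '{campaign.selected_list_id}' does not match the "
            f"approved list '{campaign.approved_list_id}'. Send blocked."
        )
    # Safeguard 2: require explicit human confirmation for the irreversible action.
    if not confirm(f"Send '{campaign.subject}' to {len(recipients)} recipients?"):
        print("Send cancelled by the user.")
        return
    for address in recipients:
        send_email(address, campaign.subject)  # hypothetical delivery call


def send_email(address: str, subject: str) -> None:
    print(f"Sending '{subject}' to {address}")
```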

#4 Functional alienation

The creation of automatic behaviours or conditioned reflexes similar to Pavlovian reflexes can reduce or destroy the user’s ability to think and judge, leading to an erosion of their decision-making sovereignty⁴.

Example: Systematic acceptance of cookies or blind validation of system alerts by cognitively fatigued users.

#5 Functional ideology

Emotional dependence on technology can alter or neutralise critical thinking and feed the mental construction of an ideology, fuelling discourse that relativises, rationalises or collectively denies how the technology actually functions or malfunctions⁵.

Example: Justification of failures or errors specific to the functioning of technology with arguments such as “It’s not the tool’s fault” or “The tool can’t guess what the user has forgotten”.

#6 Functional cultural consistency 

The antinomy, or contradictory injunction, between the logical framework imposed or influenced by technology and the values or behavioural principles promoted by the organisational culture can create tensions⁶.

Example: A technological workflow that leads to the creation of teams to validate and control the work done by others, in an organisation that promotes and values team empowerment.

#7 Functional transparency

If the decision-making mechanisms or algorithmic logic behind how the technology works are not transparent or accessible to the user, this may prevent the user from anticipating, correcting or overriding decisions that do not match their intention⁷.

Example: Preselection of candidates by technology that manages conflicts and arbitrates between user-defined selection criteria (experience, qualifications, soft skills) without the weighting or exclusion rules being explicitly visible, modifiable and verifiable by the user.
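By way of contrast, here is a minimal sketch of the transparent design choice, assuming a simple weighted-sum preselection model: the weighting and exclusion rules live in plain, user-editable configuration data rather than inside opaque code, so the user can inspect, modify and audit them. The criteria names, weights and threshold are hypothetical.

```python
# Minimal sketch of transparent, user-editable preselection rules.
# Criteria names, weights and the exclusion threshold are hypothetical.

import json

# The rules are ordinary data that the user can read, edit and version,
# rather than constants buried in the scoring code.
RULES_JSON = """
{
  "weights": {"experience": 0.5, "qualifications": 0.3, "soft_skills": 0.2},
  "exclude_below": 0.6
}
"""


def score(candidate: dict, rules: dict) -> float:
    """Weighted sum of criterion scores in [0, 1], using the declared weights."""
    weights = rules["weights"]
    return sum(candidate[criterion] * weight for criterion, weight in weights.items())


def preselect(candidates: list[dict], rules: dict) -> list[dict]:
    """Keep candidates at or above the declared threshold, with their score reported."""
    kept = []
    for candidate in candidates:
        s = score(candidate, rules)
        if s >= rules["exclude_below"]:
            kept.append({**candidate, "score": round(s, 3)})  # score is visible, not hidden
    return kept


if __name__ == "__main__":
    rules = json.loads(RULES_JSON)
    pool = [
        {"name": "A", "experience": 0.9, "qualifications": 0.7, "soft_skills": 0.8},
        {"name": "B", "experience": 0.4, "qualifications": 0.6, "soft_skills": 0.5},
    ]
    print(preselect(pool, rules))
```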

#8 Functional addiction

The presence of features based on gamification, immediate gratification or micro-reward systems calibrated to hack the user’s motivation circuits can activate neurological reward mechanisms to stimulate repetitive, compulsive and addictive behaviours, leading to emotional decompensation and self-reinforcing cycles⁸.

Example: Notifications, likes, infinite scroll algorithms, visual or audio bonuses, milestones reached through point mechanics, badges, levels or scores to maintain exponential and lasting engagement.

#9 Functional ownership

The appropriation, reuse or processing of personal or intellectual data by a technology, regardless of its public accessibility, without the informed, explicit and meaningful consent of its owner or creator, raises ethical and legal questions⁹.

Example: An AI model trained on images, text or voices of individuals found online, thereby monetising someone’s identity, knowledge or work without prior authorisation and without any explicit acceptance mechanism, licence or transparent attribution.
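As an illustration only (the record fields and the set of permitted licences below are assumptions, not a statement about what any given licence allows), one functional counter-measure is to make explicit consent or a clearly permissive licence a hard precondition for ingesting an item into a training corpus, with an auditable record of what was excluded.

```python
# Minimal sketch of a consent/licence gate on training-data ingestion.
# Record fields and the permitted-licence set are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class SourceItem:
    uri: str
    creator: str
    licence: str            # e.g. "CC0-1.0", "proprietary", "unknown"
    explicit_consent: bool   # did the creator explicitly opt in to AI training?


PERMITTED_LICENCES = {"CC0-1.0", "CC-BY-4.0"}  # assumption for the sketch


def eligible_for_training(item: SourceItem) -> bool:
    """Public accessibility is not consent: require an explicit opt-in or a permitted licence."""
    return item.explicit_consent or item.licence in PERMITTED_LICENCES


def build_corpus(items: list[SourceItem]) -> list[SourceItem]:
    corpus, excluded = [], []
    for item in items:
        (corpus if eligible_for_training(item) else excluded).append(item)
    # Keep an auditable trace of what was excluded and why it could not be used.
    for item in excluded:
        print(f"Excluded {item.uri}: no explicit consent and licence '{item.licence}' not permitted")
    return corpus
```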

#10 Functional bias

The inability of a technology to detect, mitigate or prevent bias or discriminatory patterns, whether in its design, training data, decision-making logic or deployment context, can result in unfair treatment, exclusion or systemic distortion towards individuals or groups¹⁰.

Example: A facial recognition system that performs significantly less reliably for people with dark skin due to unbalanced training data, without functional safeguards against bias or accountability mechanisms.
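Below is a minimal sketch of the kind of functional safeguard that example lacks, assuming labelled evaluation data carrying a group attribute (the field names and the disparity tolerance are hypothetical): measure accuracy separately per group and flag the system when the gap between best- and worst-served groups exceeds a declared tolerance.

```python
# Minimal sketch of a per-group performance audit for a classifier.
# Field names and the 0.05 disparity tolerance are illustrative assumptions.

from collections import defaultdict


def per_group_accuracy(records: list[dict]) -> dict[str, float]:
    """Accuracy of the system's predictions computed separately for each group."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        correct[r["group"]] += int(r["prediction"] == r["label"])
    return {group: correct[group] / total[group] for group in total}


def audit(records: list[dict], max_gap: float = 0.05) -> bool:
    """Return True only if the accuracy gap between groups stays within the declared tolerance."""
    accuracy = per_group_accuracy(records)
    gap = max(accuracy.values()) - min(accuracy.values())
    for group, value in sorted(accuracy.items()):
        print(f"group={group:>12}  accuracy={value:.3f}")
    print(f"max disparity = {gap:.3f} (tolerance {max_gap})")
    return gap <= max_gap
```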


Given their interdependence with human systems, the ten functional integrity gaps described above must be examined through a systemic approach, encompassing the nano (biological, neurological), micro (individual, behavioural), macro (organisational, institutional) and meta (cultural, ideological) levels¹¹.

The cost associated with the absence of artificial integrity in systems, whether or not they incorporate artificial intelligence, impacts various types of capital: human, cultural, decision-making, reputational, technological and financial. This cost manifests itself in the destruction of sustainable value, fuelled by unsustainable risks and an uncontrolled increase in the capital invested to generate returns, which erodes the return on invested capital (ROIC) and transforms these technological investments into structural handicaps for the company’s profitability and, consequently, for its long-term viability. Companies are not adopting responsible digital transformation solely to meet societal expectations, but because their sustainable performance depends on it and because it helps to strengthen the living fabric of the society that nourishes them and on which they depend for growth.

1. Ash, J., Kitchin, R., & Leszczynski, A. (2018). Digital turn, digital geographies? Progress in Human Geography, 42(1), 25–43. https://doi.org/10.1177/0309132516664800
2. Verbeek, P.-P. (2005). What things do: Philosophical reflections on technology, agency, and design. Penn State Press.
3. Perrow, C. (1999). Normal accidents: Living with high-risk technologies. Princeton University Press.
4. Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A. L. (2018). The dark (patterns) side of UX design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18) (Paper No. 534, pp. 1–14). Association for Computing Machinery. https://doi.org/10.1145/3173574.3174108
5. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
6. Ge, X., Xu, C., Misaki, D., Markus, H. R., & Tsai, J. L. (2024). How culture shapes what people want from AI. Stanford SPARQ. https://sparq.stanford.edu/sites/g/files/sbiybj19021/files/media/file/culture-ai.pdf
7. Gutiérrez, J. D. (2025, April 9). Why does algorithmic transparency matter and what can we do about it? Open Global Rights. https://www.openglobalrights.org/why-does-algorithmic-transparency-matter-and-what-can-we-do-about-it/
8. Yin, C., Wa, A., Zhang, Y., Huang, R., & Zheng, J. (2025). Exploring the dark patterns in user experience design for short-form videos. In N. A. Streitz & S. Konomi (Eds.), Distributed, ambient and pervasive interactions (Lecture Notes in Computer Science, Vol. 15802, pp. 330–347). Springer. https://doi.org/10.1007/978-3-031-92977-9_21
9. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
10. National Institute of Standards and Technology. (2021). NIST Special Publication 1270: Towards a standard for identifying and managing bias in artificial intelligence. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf
11. Mann, H. (2024). Artificial integrity: The paths to leading AI toward a human-centered future. Wiley.
