AI: how to protect ourselves from technological Stockholm syndrome
- Digital technologies can be a threat to individual autonomy and free will, to the point of making people forget that they have become alienated.
- The shift from perceiving machines as a source of aggression to perceiving them as a source of comfort resembles a technological “Stockholm syndrome”.
- While digital innovation is perceived as inherently positive, it can nevertheless be both emancipatory and alienating, depending on the conditions under which it is adopted.
- If we are to remain focused on our humanity, artificial intelligence must be based on a simulated form of artificial integrity, built with reference to human values.
- Artificial integrity relies on the ability to prevent and limit functional integrity gaps, a prerequisite for ensuring that the benefits of digital technologies are not achieved at the expense of humans.
The adoption of digital technologies cannot be reduced to a simple rational decision or a functional evolution of practices and uses. It detaches individuals, gradually or otherwise, from their initial frames of reference and habitual structures, immersing them in environments governed by external logics imposed by the technology itself. This shift represents a profound reconfiguration of individuals’ cognitive, social and behavioural structures, under the influence of algorithmic and prescriptive logics that supplant their own frames of reference. This process of technological transition, far from being neutral, is akin to a form of symbolic captivity in which individuals, confronted with the violence of change, activate psychological defence mechanisms in response to what they perceive as an attack on their autonomy, free will and identity integrity.
When adoption is deemed successful, it means that the initial defence structures have given way: the user has not only integrated the rules imposed by technology but has developed a form of emotional identification with it, reinterpreting the origin of the constraint as a chosen relationship. At this stage, a new normal is established. This shift marks the replacement of the old frame of reference with that of the machine, which is now perceived as familiar and reassuring. The initial aggression is repressed, and the new cognitive automatisms become objects of defence.
This phenomenon, which can be likened to a “Stockholm syndrome” in the relationship between humans and machines, involves a dislocation of cognitive references, followed by an emotional reconfiguration in which the victim comes to protect their technological aggressor. The cognitive enslavement produced in this way is not a side effect; it is a survival mechanism, fuelled by the brain’s attempts to reduce the stress generated by the intrusion of a foreign thought framework. This emotional rewriting ensures a form of internal coherence in the face of technological alienation. The user’s attention is then diverted from the initial violence and focused on the positive signals emitted by the machine: social validation, algorithmic gratification, playful rewards. These stimuli activate emotional confirmation bias and transform coercion into perceived benevolence.
Increased risk of technological dependency
Through a process of neural plasticity, brain circuits reorganise our perception of our relationship with machines: what was once stressful becomes normal; what was once domination becomes support; and what was once an aggressor becomes a companion. A reversal of the power structure takes place through the reconfiguration of the nucleus accumbens and the prefrontal cortex, anchoring a new coercive emotional relationship. This is increasingly recognised within the computer science community as a form of “computational agency”, in which software actively reconfigures perception, behaviour and emotional judgement. It represents one of the fundamental dangers that artificial intelligence poses to humanity: the normalisation of mental dependence as a vector of social acceptability. This is why it is not enough to design artificially intelligent systems: it is imperative to equip them with artificial integrity, which guarantees human cognitive sovereignty.

Some argue that digital technology contributes to the empowerment of vulnerable individuals. This argument masks a more disturbing reality: technological dependence is often presented as regained autonomy, when in fact it is based on the prior collapse of mechanisms of identity self-defence. Even when technology aims to restore relative autonomy, the process of cognitive imposition remains active, facilitated by weak defence mechanisms. Users, lacking resistance, adhere all the more quickly and deeply to the framework imposed by the machine. Whatever the case, technology shapes a new cognitive environment. The only difference is the degree of integrity of the pre-existing mental framework: the stronger the framework, the stronger the resistance; the weaker it is, the faster technological infiltration occurs. The paradox that prevents systemic recognition of this syndrome is that of innovation itself. Perceived as inherently positive, it conceals its ambivalent potential: it can both emancipate and alienate, depending on the conditions of its adoption.
Assessing the artificial integrity of digital systems
For artificial intelligence to enhance our humanity without diluting it, it must go beyond mimicking cognition and be grounded in, and guided by, artificial integrity, so as to respect individuals’ mental, emotional and identity freedoms. Technology can alleviate pain, limit risk and improve lives. But no progress should come at the cost of a cognitive debt that would ruin our ability to think for ourselves and, with it, our relationship with our own humanity. Assessing the artificial integrity of digital systems, particularly those incorporating artificial intelligence, must become a central requirement in any digital transformation. This requires functional cognitive protection mechanisms designed to prevent the emergence of functional integrity gaps, limit their impact or eliminate them, with a view to preserving the cognitive, emotional and identity complexity of human beings.
#1 Functional diversion
Using technology for purposes or in roles not intended by the designer or user organisation can render the software’s usage logic and internal governance modes ineffective or inefficient, thereby creating functional and relational confusion [1].
Example: A chatbot designed to answer questions about company HR policy is used as a substitute for the human chain of command in conflict management or task allocation.
#2 Functional void
The absence of necessary steps or functions, because they have not been developed and are therefore missing from the technology’s operating logic, creates a “functional void” for the user [2].
Example: Content generation technology (such as generative AI) that does not allow content to be exported directly in a usable format (Word, PDF, CMS) at the expected quality, thereby limiting or blocking its operational use.
#3 Functional security
The absence of safeguards, human validation steps or information messages when the system performs an action with irreversible effects that may not correspond to the user’s intention [3].
Example: A marketing technology automatically sends emails to a list of contacts with no mechanism to block the send, request user verification or alert the user when a criterion that determines the safety and quality of the send, namely the correct mailing list, has not been confirmed.
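By way of illustration, here is a minimal sketch in Python of the kind of safeguard whose absence defines this gap. The names and the confirmation workflow are hypothetical and do not refer to any specific marketing platform: the point is simply that the irreversible send stays blocked until the mailing list has been explicitly confirmed by a human.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Campaign:
    mailing_list_id: str
    confirmed_list_id: Optional[str] = None  # set only after explicit human review

def send_campaign(campaign: Campaign, send_fn: Callable[[str], None]) -> bool:
    """Refuse the irreversible send unless the mailing list has been
    explicitly confirmed by the user (hypothetical safeguard)."""
    if campaign.confirmed_list_id != campaign.mailing_list_id:
        # Surface the problem to the user instead of silently proceeding.
        print("Send blocked: the mailing list has not been confirmed.")
        return False
    send_fn(campaign.mailing_list_id)
    return True
```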
#4 Functional alienation
The creation of automatic behaviours or conditioned, quasi-Pavlovian reflexes can reduce or destroy the user’s ability to think and judge, leading to an erosion of their decision-making sovereignty [4].
Example: Systematic acceptance of cookies or blind validation of system alerts by cognitively fatigued users.
#5 Functional ideology
Emotional dependence on technology can alter or neutralise critical thinking and foster the mental construction of an ideology that fuels discourse relativising, rationalising or collectively denying the way the technology functions or malfunctions [5].
Example: Justification of failures or errors specific to the functioning of technology with arguments such as “It’s not the tool’s fault” or “The tool can’t guess what the user has forgotten”.
#6 Functional cultural consistency
The antinomy, or contradictory injunction, between the logical framework imposed or influenced by the technology and the values or behavioural principles promoted by the organisational culture can create tensions [6].
Example: A technological workflow that leads to the creation of teams to validate and control the work done by others, in an organisation that promotes and values team empowerment.
#7 Functional transparency
If the decision-making mechanisms or algorithmic logic behind how the technology works are not transparent or accessible, the user may be unable to anticipate, overcome or override outcomes that do not correspond to their intention [7].
Example: Preselection of candidates by a technology that resolves conflicts and arbitrates between user-defined selection criteria (experience, qualifications, soft skills) without the weighting or exclusion rules being explicitly visible, modifiable and verifiable by the user.
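To make the contrast concrete, here is a minimal sketch, with hypothetical criteria and purely illustrative weights, of a scoring rule whose weighting is exposed to the user rather than hidden inside the system, so that it can be inspected, modified and verified before any preselection is applied.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Candidate:
    experience: float      # each criterion normalised to the 0..1 range
    qualifications: float
    soft_skills: float

# Weights kept explicit and editable by the user rather than buried
# inside the ranking logic (the values below are illustrative only).
WEIGHTS: Dict[str, float] = {"experience": 0.4, "qualifications": 0.4, "soft_skills": 0.2}

def score(candidate: Candidate, weights: Dict[str, float] = WEIGHTS) -> float:
    """Return a transparent weighted score the user can audit and adjust."""
    return (weights["experience"] * candidate.experience
            + weights["qualifications"] * candidate.qualifications
            + weights["soft_skills"] * candidate.soft_skills)
```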
#8 Functional addiction
The presence of features based on gamification, immediate gratification or micro-reward systems calibrated to hack the user’s motivation circuits can activate neurological reward mechanisms and stimulate repetitive, compulsive and addictive behaviours, leading to emotional decompensation and self-reinforcing cycles [8].
Example: Notifications, likes, infinite-scroll algorithms, visual or audio bonuses, and milestones reached through point mechanics, badges, levels or scores, all designed to maintain ever-increasing and lasting engagement.
#9 Functional ownership
The appropriation, reuse or processing of personal or intellectual data by a technology, regardless of its public accessibility, without the informed, explicit and meaningful consent of its owner or creator, raises ethical and legal questions [9].
Example: An AI model trained on images, text or voices of individuals found online, thereby monetising someone’s identity, knowledge or work without prior authorisation and without any explicit acceptance mechanism, licence or transparent attribution.
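As a minimal sketch of the kind of consent gate whose absence defines this gap (the data structure and field names are hypothetical, and no specific training pipeline is assumed), items are excluded from a training corpus unless their creator has given explicit consent or an explicit licence applies.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ContentItem:
    creator: str
    publicly_accessible: bool
    explicit_consent: bool      # informed consent recorded from the creator
    licence: Optional[str]      # a licence explicitly permitting reuse, if any

def admissible_for_training(item: ContentItem) -> bool:
    """Public accessibility alone is not enough: require explicit consent
    or an explicit licence before the item enters the training corpus."""
    return item.explicit_consent or item.licence is not None

items: List[ContentItem] = [
    ContentItem("alice", publicly_accessible=True, explicit_consent=False, licence=None),
    ContentItem("bob", publicly_accessible=True, explicit_consent=True, licence=None),
]
corpus = [item for item in items if admissible_for_training(item)]  # keeps only "bob"'s item
```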
#10 Functional bias
The inability of a technology to detect, mitigate or prevent bias or discriminatory patterns, whether in its design, training data, decision-making logic or deployment context, can result in unfair treatment, exclusion or systemic distortion affecting individuals or groups [10].
Example: A facial recognition system that performs significantly less reliably for people with dark skin due to unbalanced training data, without functional safeguards against bias or accountability mechanisms.
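By way of illustration, here is a minimal sketch, with hypothetical groups and a purely illustrative threshold, of the kind of functional safeguard the example lacks: measuring the accuracy gap between demographic groups and flagging the system when the gap exceeds an agreed limit.

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def accuracy_by_group(records: Iterable[Tuple[str, str, str]]) -> Dict[str, float]:
    """records: (group, predicted_label, actual_label) triples; returns per-group accuracy."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {group: correct[group] / total[group] for group in total}

def flag_disparity(records: Iterable[Tuple[str, str, str]], max_gap: float = 0.05):
    """Flag the system when the accuracy gap between groups exceeds max_gap
    (the threshold here is an illustrative assumption, not a standard)."""
    accuracies = accuracy_by_group(list(records))
    gap = max(accuracies.values()) - min(accuracies.values())
    return gap > max_gap, accuracies
```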
The cost of lacking artificial integrity impacts many types of capital, particularly human capital.
Given their interdependence with human systems, the ten functional integrity gaps described above must be examined through a systemic approach, encompassing the nano (biological, neurological), micro (individual, behavioural), macro (organisational, institutional) and meta (cultural, ideological) levels [11].
The cost associated with the absence of artificial integrity in systems, whether or not they incorporate artificial intelligence, affects various types of capital: human, cultural, decision-making, reputational, technological and financial. This cost manifests itself in the destruction of sustainable value, fuelled by unsustainable risks and an uncontrolled rise in the capital that must be invested to generate returns, eroding the return on invested capital (ROIC) and transforming these technological investments into structural handicaps for the company’s profitability and, consequently, for its long-term viability. Companies do not adopt responsible digital transformation solely to meet societal expectations, but because their sustainable performance depends on it and because it helps strengthen the living fabric of the society that nourishes them and on which they depend for growth.
https://doi.org/10.1007/s11023-018-9482-5
https://www.openglobalrights.org/why-does-algorithmic-transparency-matter-and-what-can-we-do-about-it/
https://doi.org/10.1007/978-3-031-92977-9_21
National Institute of Standards and Technology. (2021). NIST Special Publication 1270: Towards a standard for identifying and managing bias in artificial intelligence. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf