Artificial Intelligence (AI) is currently booming and transforming many aspects of our daily lives. It optimises the operation of search engines and makes it possible to analyse queries more effectively in order to propose the most relevant results1. It improves surveillance systems, which now use it to detect suspicious behaviour2. It offers invaluable assistance in the healthcare sector for analysing medical images, developing new drugs and personalising treatments3. However, there is a fundamental distinction between the AI we know today, often referred to as “classical AI”, and a more ambitious concept: Artificial General Intelligence (AGI).
Classical AI is designed to excel at specific tasks and can outperform the best experts or specialised algorithms. AGI, on the other hand, aspires to an intelligence comparable to that of a human being. It aims to understand the world in all its complexity, to learn autonomously and to adapt to new situations. In other words, AGI would be capable of solving a wide variety of problems, reasoning, creating and being self-aware4.
Growing alarmism about AI
Warnings about the rise of general-purpose AI are multiplying, pointing to a bleak future for our civilisation. Several leading figures in the world of technology have warned of the harmful effects of this technology. Stephen Hawking has expressed fears that AI could supplant humans, leading to a new era in which machines could dominate5. Eminent American professors, such as Stuart Russell, Professor at the University of California, Berkeley, have also highlighted the shift towards a universe where AI will play a role that is unknown at this stage, with new risks to be taken into account and anticipated6. Furthermore, Jerome Glenn of the Millennium Project has stated7 that “governing AGI could be the most complex management problem humanity has ever faced” and that “the slightest mistake could wipe us off the face of the Earth.” These assertions suggest an extremely pessimistic, even catastrophic, outlook on the development of the AGI.
Is AGI really imminent?
A fundamental criticism of the imminence of AGI is based on the “problem of the complexity of values”, a key concept addressed by Nick Bostrom in Superintelligence: Paths, Dangers, Strategies8. The evolutionary process of human life and civilisation spans billions of years, with the development of numerous complex systems of feelings, but also of controls and values, thanks to the many and varied interactions with an environment that is physical, biological, and social. From this perspective, it is hypothesised that an autonomous and highly sophisticated AGI cannot be achieved in just a few decades.
The Australian Rodney Brooks, one of the icons and pioneers of robotics and theories of “embodied cognition”, maintains that what will determine whether an intelligence is truly autonomous and sophisticated is its integration within a body and continuous interaction with a complex environment over a sufficiently long period9. These elements reinforce the thesis that AGI, as described in the alarmist scenarios, is still a long way from becoming a reality.
In what way is current AI not yet general AI?
Recent years have seen the rise of large language models (LLMs) such as ChatGPT, Gemini, Copilot and so on. These have demonstrated an impressive ability to assimilate many implicit human values, based on massive analysis of written documents. Because of its architecture and the way it works, ChatGPT has a number of limitations10. It does not support logical reasoning, its responses are sometimes unreliable, its knowledge base is not adapted in real-time, and it is susceptible to “prompt injection” attacks. Although these models have sophisticated value systems, they do not appear to be autonomous. In fact, they do not seem to aim for autonomy or self-preservation within an environment that is both complex and variable. In this respect, it is important to remember that a very important part of communication is linked to intonation and body language11, elements that are not at all considered in interactions with generative AIs.
A simple reminder of this (profound) distinction seems crucial to better understand the extent to which concerns over malicious superintelligence are unfounded and excessive. Today, LLMs can only be considered as parrots providing probabilistic answers (“stochastic parrots” according to Emily Bender12). Of course, they represent a break with the past, and it appears necessary to regulate their use now.
What are the arguments for an omnibenevolent superintelligence?
It seems to us that future intelligence cannot be “artificial” in the strict sense of the word, i.e. designed from scratch. But it would be highly collaborative, emerging from the knowledge (and even wisdom) accumulated by humankind. It is realistic to consider that current AIs, as such, are largely tools and embodiments of collective thought patterns, tending towards benevolence rather than control or domination. This collective intelligence is nothing less than a deep memory that is nourished by civilised values such as helping those in need, respect for the environment and respect for others. We therefore need to protect this intangible heritage and ensure that it is aimed at providing support and help to human beings rather than transmitting misinformation or inciting them to commit reprehensible acts. At the risk of being Manichean, LLMs can be used for good13, but they can also be used for evil14.
What evidence is there to refute the scenarios of domination and control by AGI?
From a logical point of view, alarmist scenarios in which malicious actors would be led, in the short term, to programme manifestly harmful objectives into the heart of AI appear a priori to be exaggerated. The argument of the complexity of values suggests that these negative values would be poorly integrated into the mass of positive values learned. Furthermore, it seems likely that well-intentioned programmers (white hats) will create AIs that can counter the destructive strategies of malicious AIs (black hats). This could lead, quite naturally, to a classic “arms race”. Another counter-argument to a malicious takeover of AIs is their economic potential. At present, AI for the general public is being driven by major players in the economic sector (OpenAI, Google, Microsoft, etc.), at least some of whom have a profit rationale. This requires user confidence in the use of the AI made available, but also the preservation of the data and algorithms that make up AI as an intangible asset at the heart of economic activity. The resources required for protection and cyber-defence will therefore be considerable.
Proposals for better governance of AI
Initiatives have already been taken to regulate specialised AI. However, the regulation of artificial general intelligence will require specific measures. One such initiative is the AI Act currently being drafted by the European Union15. The authors make the following additional proposals:
- The introduction of a system of national licences to ensure that any new AGI complies with the necessary safety standards,
- Systems for verifying the safety of AI in controlled environments before they are authorised and deployed,
- The development of more advanced international cooperation, which could lead to UN General Assembly resolutions and the establishment of conventions on AI.
Rational regulation of AI requires an informed analysis of the issues at stake and a balance between preventing risks and promoting benefits. International institutions and technical experts will play an important role in coordinating the efforts required for the safe and ethical development of AI. Good governance and effective regulation of the AGI will require a dispassionate approach.