ai robot sentenced in court in orange jumpsuit, chained and seated in a prison setting. The background includes guards, highlighting themes of artificial intelligence, control, and dystopian scenarios
π Digital π Society
How can artificial intelligence be regulated?

Artificial general intelligence: how will it be regulated?

with Jean Langlois-Berthelot, Doctor of Applied Mathematics and Head of Division in the French Army and Christophe Gaie, Head of the Engineering and Digital Innovation Division at the Prime Minister's Office
On October 2nd, 2024 |
5 min reading time
Jean LANGLOIS-BERTHELOT
Jean Langlois-Berthelot
Doctor of Applied Mathematics and Head of Division in the French Army
Christophe Gaie
Christophe Gaie
Head of the Engineering and Digital Innovation Division at the Prime Minister's Office
Key takeaways
  • Current artificial intelligence (AI) excels at specific tasks but remains different from artificial general intelligence (AGI), which aims for intelligence comparable to that of humans.
  • Current AI models, while sophisticated, are not autonomous and have significant limitations that differentiate them from AGI.
  • Fears about AGI are growing; some experts are concerned that it could supplant humanity, while others consider this prospect to still be a long way off.
  • Rational regulation of AGI requires an informed analysis of the issues at stake and a balance between preventing risks and promoting benefits.
  • Proposals for effective regulation of AGI include national licences, rigorous safety tests and enhanced international cooperation.

Arti­fi­cial Intel­li­gence (AI) is cur­rently boom­ing and trans­form­ing many aspects of our daily lives. It optim­ises the oper­a­tion of search engines and makes it pos­sible to ana­lyse quer­ies more effect­ively in order to pro­pose the most rel­ev­ant res­ults1. It improves sur­veil­lance sys­tems, which now use it to detect sus­pi­cious beha­viour2. It offers invalu­able assist­ance in the health­care sec­tor for ana­lys­ing med­ic­al images, devel­op­ing new drugs and per­son­al­ising treat­ments3. How­ever, there is a fun­da­ment­al dis­tinc­tion between the AI we know today, often referred to as “clas­sic­al AI”, and a more ambi­tious concept: Arti­fi­cial Gen­er­al Intel­li­gence (AGI).

Clas­sic­al AI is designed to excel at spe­cif­ic tasks and can out­per­form the best experts or spe­cial­ised algorithms. AGI, on the oth­er hand, aspires to an intel­li­gence com­par­able to that of a human being. It aims to under­stand the world in all its com­plex­ity, to learn autonom­ously and to adapt to new situ­ations. In oth­er words, AGI would be cap­able of solv­ing a wide vari­ety of prob­lems, reas­on­ing, cre­at­ing and being self-aware4.

Growing alarmism about AI

Warn­ings about the rise of gen­er­al-pur­pose AI are mul­tiply­ing, point­ing to a bleak future for our civil­isa­tion. Sev­er­al lead­ing fig­ures in the world of tech­no­logy have warned of the harm­ful effects of this tech­no­logy. Steph­en Hawk­ing has expressed fears that AI could sup­plant humans, lead­ing to a new era in which machines could dom­in­ate5. Emin­ent Amer­ic­an pro­fess­ors, such as Stu­art Rus­sell, Pro­fess­or at the Uni­ver­sity of Cali­for­nia, Berke­ley, have also high­lighted the shift towards a uni­verse where AI will play a role that is unknown at this stage, with new risks to be taken into account and anti­cip­ated6. Fur­ther­more, Jerome Glenn of the Mil­len­ni­um Pro­ject has stated7 that “gov­ern­ing AGI could be the most com­plex man­age­ment prob­lem human­ity has ever faced” and that “the slight­est mis­take could wipe us off the face of the Earth.” These asser­tions sug­gest an extremely pess­im­ist­ic, even cata­stroph­ic, out­look on the devel­op­ment of the AGI.

Is AGI really imminent?

A fun­da­ment­al cri­ti­cism of the immin­ence of AGI is based on the “prob­lem of the com­plex­ity of val­ues”, a key concept addressed by Nick Bostrom in Super­in­tel­li­gence: Paths, Dangers, Strategies8. The evol­u­tion­ary pro­cess of human life and civil­isa­tion spans bil­lions of years, with the devel­op­ment of numer­ous com­plex sys­tems of feel­ings, but also of con­trols and val­ues, thanks to the many and var­ied inter­ac­tions with an envir­on­ment that is phys­ic­al, bio­lo­gic­al, and social. From this per­spect­ive, it is hypo­thes­ised that an autonom­ous and highly soph­ist­ic­ated AGI can­not be achieved in just a few decades.

The Aus­trali­an Rod­ney Brooks, one of the icons and pion­eers of robot­ics and the­or­ies of “embod­ied cog­ni­tion”, main­tains that what will determ­ine wheth­er an intel­li­gence is truly autonom­ous and soph­ist­ic­ated is its integ­ra­tion with­in a body and con­tinu­ous inter­ac­tion with a com­plex envir­on­ment over a suf­fi­ciently long peri­od9. These ele­ments rein­force the thes­is that AGI, as described in the alarm­ist scen­ari­os, is still a long way from becom­ing a reality.

In what way is current AI not yet general AI?

Recent years have seen the rise of large lan­guage mod­els (LLMs) such as Chat­G­PT, Gem­ini, Copi­lot and so on. These have demon­strated an impress­ive abil­ity to assim­il­ate many impli­cit human val­ues, based on massive ana­lys­is of writ­ten doc­u­ments. Because of its archi­tec­ture and the way it works, Chat­G­PT has a num­ber of lim­it­a­tions10. It does not sup­port logic­al reas­on­ing, its responses are some­times unre­li­able, its know­ledge base is not adap­ted in real-time, and it is sus­cept­ible to “prompt injec­tion” attacks. Although these mod­els have soph­ist­ic­ated value sys­tems, they do not appear to be autonom­ous. In fact, they do not seem to aim for autonomy or self-pre­ser­va­tion with­in an envir­on­ment that is both com­plex and vari­able. In this respect, it is import­ant to remem­ber that a very import­ant part of com­mu­nic­a­tion is linked to inton­a­tion and body lan­guage11, ele­ments that are not at all con­sidered in inter­ac­tions with gen­er­at­ive AIs.

A simple remind­er of this (pro­found) dis­tinc­tion seems cru­cial to bet­ter under­stand the extent to which con­cerns over mali­cious super­in­tel­li­gence are unfoun­ded and excess­ive. Today, LLMs can only be con­sidered as par­rots provid­ing prob­ab­il­ist­ic answers (“stochast­ic par­rots” accord­ing to Emily Bend­er12). Of course, they rep­res­ent a break with the past, and it appears neces­sary to reg­u­late their use now.

What are the arguments for an omnibenevolent superintelligence?

It seems to us that future intel­li­gence can­not be “arti­fi­cial” in the strict sense of the word, i.e. designed from scratch. But it would be highly col­lab­or­at­ive, emer­ging from the know­ledge (and even wis­dom) accu­mu­lated by human­kind. It is real­ist­ic to con­sider that cur­rent AIs, as such, are largely tools and embod­i­ments of col­lect­ive thought pat­terns, tend­ing towards bene­vol­ence rather than con­trol or dom­in­a­tion. This col­lect­ive intel­li­gence is noth­ing less than a deep memory that is nour­ished by civ­il­ised val­ues such as help­ing those in need, respect for the envir­on­ment and respect for oth­ers. We there­fore need to pro­tect this intan­gible her­it­age and ensure that it is aimed at provid­ing sup­port and help to human beings rather than trans­mit­ting mis­in­form­a­tion or incit­ing them to com­mit rep­re­hens­ible acts. At the risk of being Manichean, LLMs can be used for good13, but they can also be used for evil14.

What evidence is there to refute the scenarios of domination and control by AGI?

From a logic­al point of view, alarm­ist scen­ari­os in which mali­cious act­ors would be led, in the short term, to pro­gramme mani­festly harm­ful object­ives into the heart of AI appear a pri­ori to be exag­ger­ated. The argu­ment of the com­plex­ity of val­ues sug­gests that these neg­at­ive val­ues would be poorly integ­rated into the mass of pos­it­ive val­ues learned. Fur­ther­more, it seems likely that well-inten­tioned pro­gram­mers (white hats) will cre­ate AIs that can counter the destruct­ive strategies of mali­cious AIs (black hats). This could lead, quite nat­ur­ally, to a clas­sic “arms race”. Anoth­er counter-argu­ment to a mali­cious takeover of AIs is their eco­nom­ic poten­tial. At present, AI for the gen­er­al pub­lic is being driv­en by major play­ers in the eco­nom­ic sec­tor (OpenAI, Google, Microsoft, etc.), at least some of whom have a profit rationale. This requires user con­fid­ence in the use of the AI made avail­able, but also the pre­ser­va­tion of the data and algorithms that make up AI as an intan­gible asset at the heart of eco­nom­ic activ­ity. The resources required for pro­tec­tion and cyber-defence will there­fore be considerable.

Proposals for better governance of AI

Ini­ti­at­ives have already been taken to reg­u­late spe­cial­ised AI. How­ever, the reg­u­la­tion of arti­fi­cial gen­er­al intel­li­gence will require spe­cif­ic meas­ures. One such ini­ti­at­ive is the AI Act cur­rently being draf­ted by the European Uni­on15. The authors make the fol­low­ing addi­tion­al proposals:

  • The intro­duc­tion of a sys­tem of nation­al licences to ensure that any new AGI com­plies with the neces­sary safety standards,
  • Sys­tems for veri­fy­ing the safety of AI in con­trolled envir­on­ments before they are author­ised and deployed,
  • The devel­op­ment of more advanced inter­na­tion­al cooper­a­tion, which could lead to UN Gen­er­al Assembly res­ol­u­tions and the estab­lish­ment of con­ven­tions on AI.

Ration­al reg­u­la­tion of AI requires an informed ana­lys­is of the issues at stake and a bal­ance between pre­vent­ing risks and pro­mot­ing bene­fits. Inter­na­tion­al insti­tu­tions and tech­nic­al experts will play an import­ant role in coordin­at­ing the efforts required for the safe and eth­ic­al devel­op­ment of AI. Good gov­ernance and effect­ive reg­u­la­tion of the AGI will require a dis­pas­sion­ate approach.

1Vijaya, P., Raju, G. & Ray, S.K. Arti­fi­cial neur­al net­work-based mer­ging score for Meta search engine. J. Cent. South Univ. 23, 2604–2615 (2016). https://doi.org/10.1007/s11771-016‑3322‑7
2Li, Jh. Cyber secur­ity meets arti­fi­cial intel­li­gence: a sur­vey. Fron­ti­ers Inf Tech­n­ol Elec­tron­ic Eng 19, 1462–1474 (2018). https://​doi​.org/​1​0​.​1​6​3​1​/​F​I​T​E​E​.​1​8​00573
3Jiang F, Jiang Y, Zhi H, et al Arti­fi­cial intel­li­gence in health­care: past, present and future Stroke and Vas­cu­lar Neur­o­logy 2017;2: https://doi.org/10.1136/svn-2017–000101
4Ng, Gee Wah, and Wang Chi Leung. “Strong Arti­fi­cial Intel­li­gence and Con­scious­ness.” Journ­al of Arti­fi­cial Intel­li­gence and Con­scious­ness 07, no. 01 (March 1, 2020): 63–72. https://​doi​.org/​1​0​.​1​1​4​2​/​s​2​7​0​5​0​7​8​5​2​0​3​00042
5Kharpal, Arjun. “Steph­en Hawk­ing says A.I. could be ‘worst event in the his­tory of our civil­iz­a­tion.’” CNBC, Novem­ber 6, 2017. https://​www​.cnbc​.com/​2​0​1​7​/​1​1​/​0​6​/​s​t​e​p​h​e​n​-​h​a​w​k​i​n​g​-​a​i​-​c​o​u​l​d​-​b​e​-​w​o​r​s​t​-​e​v​e​n​t​-​i​n​-​c​i​v​i​l​i​z​a​t​i​o​n​.html
6Chia Jes­sica, Cian­ci­olo Beth­any, “Opin­ion: We’ve reached a turn­ing point with AI, expert says” Septem­ber 5, 2023, https://​edi​tion​.cnn​.com/​2​0​2​3​/​0​5​/​3​1​/​o​p​i​n​i​o​n​s​/​a​r​t​i​f​i​c​i​a​l​-​i​n​t​e​l​l​i​g​e​n​c​e​-​s​t​u​a​r​t​-​r​u​s​s​e​l​l​/​i​n​d​e​x​.html
7Jerome C. Glenn, Feb­ru­ary 2023, “Arti­fi­cial Gen­er­al Intel­li­gence Issues and Oppor­tun­it­ies”, The Mil­leni­um Pro­ject, Foresight for the 2nd Stra­tegic Plan of Hori­zon Europe (2025–27) https://​www​.mil​len​ni​um​-pro​ject​.org/​w​p​-​c​o​n​t​e​n​t​/​u​p​l​o​a​d​s​/​2​0​2​3​/​0​5​/​E​C​-​A​G​I​-​p​a​p​e​r.pdf
8Nick Bostrom. 2014. Super­in­tel­li­gence: Paths, Dangers, Strategies, 1st edi­tion. Oxford Uni­ver­sity Press, Inc., USA.
9Brooks, R. A. (1991). Intel­li­gence Without Rep­res­ent­a­tion. Arti­fi­cial Intel­li­gence, 47(1–3), 139–159.
10
11Quinn, Jay­me, and Jay­me Quinn. “How Much of Com­mu­nic­a­tion Is Non­verbal? | UT Per­mi­an Basin Online.” The Uni­ver­sity of Texas Per­mi­an Basin | UTPB, May 15, 2023. https://​online​.utpb​.edu/​a​b​o​u​t​-​u​s​/​a​r​t​i​c​l​e​s​/​c​o​m​m​u​n​i​c​a​t​i​o​n​/​h​o​w​-​m​u​c​h​-​o​f​-​c​o​m​m​u​n​i​c​a​t​i​o​n​-​i​s​-​n​o​n​v​e​rbal/
12Emily M. Bend­er, Tim­nit Gebru, Angelina McMil­lan-Major, and Shmar­garet Shmitchell. 2021. On the Dangers of Stochast­ic Par­rots: Can Lan­guage Mod­els Be Too Big? In Pro­ceed­ings of the 2021 ACM Con­fer­ence on Fair­ness, Account­ab­il­ity, and Trans­par­ency (FAccT ‘21). Asso­ci­ation for Com­put­ing Machinery, New York, NY, USA, 610–623. https://​doi​.org/​1​0​.​1​1​4​5​/​3​4​4​2​1​8​8​.​3​4​45922
13Javaid, Mohd, Abid Haleem, and Ravi Pra­tap Singh. “Chat­G­PT for health­care ser­vices: An emer­ging stage for an innov­at­ive per­spect­ive.” Bench­Coun­cil Trans­ac­tions on Bench­marks, Stand­ards and Eval­u­ations 3, no. 1 (2023): 100105. https://​doi​.org/​1​0​.​1​0​1​6​/​j​.​t​b​e​n​c​h​.​2​0​2​3​.​1​00105
14Lohmann, S. (2024). Chat­G­PT, Arti­fi­cial Intel­li­gence, and the Ter­ror­ist Tool­box. An Amer­ic­an Per­spect­ive, 23. https://​media​.defense​.gov/​2​0​2​4​/​A​p​r​/​1​8​/​2​0​0​3​4​4​4​2​2​8​/​-​1​/​-​1​/​0​/​2​0​2​4​0​5​0​6​_​S​i​m​-​H​a​r​t​u​n​i​a​n​-​M​i​l​a​s​_​E​m​e​r​g​i​n​g​T​e​c​h​_​F​i​n​a​l​.​P​D​F​#​p​a​ge=41
15“Lay­ing Down Har­mon­ised Rules On Arti­fi­cial Intel­li­gence (Arti­fi­cial Intel­li­gence Act) And Amend­ing Cer­tain Uni­on Legis­lat­ive Acts,” https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206

Support accurate information rooted in the scientific method.

Donate