What are the next challenges for AI?

Are artificial intelligence and human intelligence comparable?

with Daniel Andler, Professor emeritus in Philosophy of Science at Sorbonne Université, Maxime Amblard, Professor of Computer Science at Université de Lorraine and Annabelle Blangero, PhD in Neuroscience and Data Science Manager at Ekimetrics
On January 17th, 2024
Daniel Andler
Professor emeritus in Philosophy of Science at Sorbonne Université
Maxime Amblard
Professor of Computer Science at Université de Lorraine
Annabelle Blangero
PhD in Neuroscience and Data Science Manager at Ekimetrics
Key takeaways
  • Artificial intelligence and human intelligence are inevitably compared.
  • This confrontation is intrinsic to the history of AI: some approaches are inspired by human cognition, while others are completely independent from it.
  • The imprecise and controversial definition of intelligence makes this comparison vague.
  • Consciousness remains one of the main elements that AI seems to lack in order to imitate human intelligence.
  • The question of comparison in fact raises ethical issues about the use, purpose and regulation of AI.

Artificial intelligence (AI) is disrupting the world as we know it. It is permeating every part of our lives, with more or less desirable and ambitious goals. Inevitably, AI and human intelligence (HI) are being compared. Far from coming out of nowhere, this confrontation can be explained by historical dynamics inscribed deep within the AI project.

A long-standing comparison

AI and HI, as fields of study, have co-evolved. Two distinct approaches have existed since the early days of modern computing: evolution in parallel with human cognition, or in disregard of it. "The founders of AI were divided into two approaches. On the one hand, those who wanted to analyse human mental processes and reproduce them on a computer, in a mirror image, so that the two undertakings would feed off each other. On the other, those who saw HI as a limitation rather than an inspiration. This trend was interested in problem solving, in other words in the result and not the process," recalls Daniel Andler.

Our tendency to compare AI and HI in numerous publications is therefore not a recent fad, but part of the history of AI. What is symptomatic of our times is the tendency to equate the entire digital world with AI: "Today, all computing is described as AI. You have to go back to the foundations of the discipline to understand that AI is a specific tool, defined by the calculation being made and the nature of the task it is solving. If the task seems to involve human skills, we speak of a capacity for intelligence. That, in essence, is what AI is all about," explains Maxime Amblard.

Two branches of the same tree

The two major trends mentioned above have given rise to two major categories of AI:

  • symbolic AI, based on logical inference rules, which has little to do with human cognition;
  • connectionist AI, based on neural networks, which is inspired by human cognition.
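To make the contrast concrete, here is a minimal, hypothetical sketch (not drawn from the article): symbolic AI encoded as hand-written inference rules, versus connectionist AI as a single artificial neuron (a perceptron) whose behaviour is learned from examples. The spam-filtering task, the keyword rules, and the training data are all illustrative assumptions.

```python
# Symbolic AI: behaviour is encoded up front as explicit rules.
def symbolic_is_spam(message: str) -> bool:
    """Classify with hand-written rules (the 'expert knowledge' approach)."""
    rules = ["free money", "click here", "winner"]
    return any(keyword in message.lower() for keyword in rules)

# Connectionist AI: behaviour is learned from labelled examples,
# here with a single neuron trained by error correction.
def train_perceptron(examples, features, epochs=20, lr=0.5):
    weights = [0.0] * len(features)
    bias = 0.0
    for _ in range(epochs):
        for text, label in examples:
            x = [1.0 if f in text.lower() else 0.0 for f in features]
            predicted = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = label - predicted  # 0 when correct, ±1 when wrong
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def connectionist_is_spam(message, weights, bias, features):
    x = [1.0 if f in message.lower() else 0.0 for f in features]
    return sum(w * xi for w, xi in zip(weights, x)) + bias > 0

# Toy training data (label 1 = spam, 0 = not spam).
features = ["free", "money", "meeting", "report"]
examples = [
    ("free money now", 1),
    ("win free prizes", 1),
    ("team meeting at noon", 0),
    ("quarterly report attached", 0),
]
weights, bias = train_perceptron(examples, features)
```

The rule-based classifier does exactly what its author wrote, no more; the perceptron's weights are whatever the data produced, which is why the second family only took off once data and computing power became abundant.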

Maxime Amblard takes us back to the context of the time: "In the middle of the 20th century, the computing capacity of computers was tiny compared with today. So, we thought that to have intelligent systems, the calculation would have to contain expert information that we had previously encoded in the form of rules and symbols. At the same time, other researchers were more interested in how expertise could be generated. The question then became: how can we construct a probability distribution that will provide a good explanation of how the world works? It's easy to see why these approaches exploded when the availability of data, memory and computing capacity increased radically."
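The question Amblard quotes, constructing a probability distribution that explains observed data, can be illustrated very schematically by a maximum-likelihood word-frequency estimate over a toy corpus (an illustration of the general idea only, not the researchers' actual models):

```python
from collections import Counter

# Toy corpus: the "world" we observe (purely illustrative data).
corpus = "the cat sat on the mat the cat slept".split()

# Maximum-likelihood estimate: P(word) = count(word) / total tokens.
counts = Counter(corpus)
total = sum(counts.values())
distribution = {word: n / total for word, n in counts.items()}

# The probabilities sum to 1, and frequent words get more mass:
# "the" accounts for 3 of the 9 tokens, i.e. 1/3.
```

Modern connectionist models estimate vastly richer distributions (over sentences, images, actions), but the underlying logic is the same: fit a distribution to data rather than hand-encode rules.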

To illustrate the historical development of these two branches, Maxime Amblard uses the metaphor of two skis advancing one after the other: "Before computing power became available, probabilistic models were ignored in favour of symbolic models. We are currently experiencing a peak in connectionist AI thanks to its revolutionary results. Nevertheless, the problem of making the results comprehensible leaves the way open for hybrid systems (connectionist and symbolic) to put knowledge back into classic probabilistic approaches."

For her part, Annabelle Blangero points out that today "there is some debate as to whether expert systems really correspond to AI, given that there is a tendency to describe as AI only systems that necessarily involve machine learning". Nevertheless, Daniel Andler mentions one of the leading figures in AI, Stuart Russell, who remains very attached to symbolic AI. Maxime Amblard also agrees: "Perhaps my vision is too influenced by the history and epistemology of AI, but I think that to describe something as intelligent, it is more important to ask how what is produced by the computation is able to change the world, rather than focusing on the nature of the tool used."

Does the machine resemble us?

These historical and definitional detours lead to the following question: are AI and HI two sides of the same coin? Before we can come up with an answer, we need to look at the methodological framework that makes this comparison possible. For Daniel Andler, "functionalism is the framework par excellence within which the question of comparison arises, provided that we call 'intelligence' the combined result of cognitive functions". However, something is almost certainly missing if we are to get as close as possible to human intelligence, situated in time and space. "Historically, it was John Haugeland who developed this idea of a missing ingredient in AI. We often think of consciousness, intentionality, autonomy, emotions or even the body," Daniel Andler explains.

Consciousness and the associated mental states seem to be missing from AI. For Annabelle Blangero, this missing ingredient is simply a question of technical means: "I come from a school of thought in neuroscience where we consider that consciousness emerges from the constant evaluation of the environment and the associated sensory-motor reactions. Based on this principle, reproducing human multimodality in a robot should bring out the same characteristics. Today, the architecture of connectionist systems reproduces fairly closely what happens in the human brain. What's more, similar measures of activity are used in biological and artificial neural networks."

Nevertheless, as Daniel Andler points out, "Today, there is no single theory to account for consciousness in humans. The question of its emergence is wide open and the subject of much debate in the scientific-philosophical community". For Maxime Amblard, the fundamental difference lies in the desire to make sense: "Humans construct explanatory models for what they perceive. We are veritable meaning-making machines."

The thorny question of intelligence

Despite these well-developed arguments, the question of equating AI with HI remains unanswered. The problem is, in fact, primarily conceptual, and concerns the way in which we define intelligence.

A classic definition would describe intelligence as the set of abilities that enable us to solve problems. In his recent book, Intelligence artificielle, intelligence humaine: la double énigme, Daniel Andler proposes an alternative, elegant definition: "animals (human or non-human) deploy the ability to adapt to situations. They learn to solve problems that concern them, in time and space. They couldn't care less about solving general, decontextualised problems".

This definition, which is open to debate, has the merit of placing intelligence in context rather than treating it as an invariant concept. The mathematician and philosopher also reminds us of the nature of the concept of intelligence: "Intelligence is what we call a dense concept: it is both descriptive and objective, appreciative and subjective. Although in practice we can quickly come to a conclusion about a person's intelligence in a given situation, in principle it is always open to debate".

Putting AI to work for humans

In the end, the issue of comparison seems irrelevant if we are looking for a concrete answer. It is of greater interest if we seek to understand the intellectual path we have travelled: the process. This reflection highlights some crucial questions: what do we want to give AI? To what end? What do we want for the future of our societies?

These are essential questions that revive the ethical, economic, legislative, and social challenges that need to be taken up by the players in the world of AI, and by governments and citizens the world over. At the end of the day, there is no point in knowing whether AI is, or will be, like us. The only important question is: what do we want to do with it, and why?

Julien Hernandez
