What are the next challenges for AI?

Are artificial intelligence and human intelligence comparable?

Daniel Andler, Professor emeritus in Philosophy of Science at Sorbonne Université; Maxime Amblard, Professor of Computer Science at Université de Lorraine; and Annabelle Blangero, PhD in Neuroscience and Data Science Manager at Ekimetrics
On January 17th, 2024
Key takeaways
  • Artificial intelligence and human intelligence are inevitably compared.
  • This confrontation is intrinsic to the history of AI: some approaches are inspired by human cognition, while others are completely independent from it.
  • The imprecise and controversial definition of intelligence makes this comparison vague.
  • Consciousness remains one of the main elements that AI seems to lack in order to imitate human intelligence.
  • The question of comparison in fact raises ethical issues about the use, purpose and regulation of AI.

Artificial intelligence (AI) is disrupting the world as we know it. It is permeating every part of our lives, with goals of varying desirability and ambition. Inevitably, AI and human intelligence (HI) are being compared. Far from coming out of nowhere, this confrontation can be explained by historical dynamics inscribed deep within the AI project.

A long-standing comparison

AI and HI, as fields of study, have co-evolved. There have been two distinct approaches since the early days of modern computing: evolution by parallelism or by disregard. “The founders of AI were divided into two approaches. On the one hand, those who wanted to analyse human mental processes and reproduce them on a computer, in a mirror image, so that the two undertakings would feed off each other. On the other, those who saw HI as a limitation rather than an inspiration. This trend was interested in problem solving, in other words in the result and not the process”, recalls Daniel Andler.

Our tendency to compare AI and HI in numerous publications is therefore not a current trend, but part of the history of AI. What is symptomatic of our times is the tendency to equate the entire digital world with AI: “Today, all computing is described as AI. You have to go back to the foundations of the discipline to understand that AI is a specific tool, defined by the calculation that is being made and the nature of the task it is solving. If the task seems to involve human skills, we will be looking at the capacity for intelligence. That, in essence, is what AI is all about”, explains Maxime Amblard.

Two branches of the same tree

The two major trends mentioned above have given rise to two major categories of AI:

  • symbolic AI, based on logical inference rules, which has little to do with human cognition;
  • connectionist AI, based on neural networks, which is inspired by human cognition.

Maxime Amblard takes us back to the context of the time: “In the middle of the 20th century, the computing capacity of computers was tiny compared with today. So, we thought that to have intelligent systems, the calculation would have to contain expert information that we had previously encoded in the form of rules and symbols. At the same time, other researchers were more interested in how expertise could be generated. The question then became: how can we construct a probability distribution that will provide a good explanation of how the world works? It’s easy to see why these approaches exploded when the availability of data, memory, and computing capacity increased radically”.
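The contrast between the two families can be sketched in a few lines of Python. This is purely illustrative: the rules, function names, and training data below are invented for the example, and the "connectionist" side is reduced to a single artificial neuron rather than a modern network.

```python
# --- Symbolic AI: expert knowledge hand-encoded as explicit inference rules ---
def symbolic_inference(facts):
    """Forward-chain over hand-written rules until no new fact can be derived."""
    rules = [
        ({"fever", "cough"}, "flu_suspected"),       # invented toy rules
        ({"flu_suspected", "fatigue"}, "rest_recommended"),
    ]
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts


# --- Connectionist AI: behaviour learned from examples, not hand-encoded ---
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn the weights of a single artificial neuron from labelled data."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # Nudge the weights toward the observed example on each error
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b
```

In the first function the "intelligence" lives entirely in rules an expert wrote down; in the second, nothing is encoded in advance and the behaviour emerges from the data, which is exactly why this family took off once data and computing power became abundant.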

To illustrate the historical development of these two branches, Maxime Amblard uses the metaphor of two skis advancing one after the other: “Before computing power became available, probabilistic models were ignored in favour of symbolic models. We are currently experiencing a peak in connectionist AI thanks to its revolutionary results. Nevertheless, the problem of making the results comprehensible leaves the way open for hybrid systems (connectionist and symbolic) to put knowledge back into classic probabilistic approaches”.

For her part, Annabelle Blangero points out that today “there is some debate as to whether expert systems really correspond to AI, given that there is a tendency to describe systems that necessarily involve machine learning as AI”. Nevertheless, Daniel Andler mentions one of the leading figures in AI, Stuart Russell, who remains very attached to symbolic AI. Maxime Amblard also agrees: “Perhaps my vision is too influenced by the history and epistemology of AI, but I think that to describe something as intelligent, it is more important to ask how what is produced by the computation is able to change the world, rather than focusing on the nature of the tool used.”

Does the machine resemble us?

After these historical and definitional detours, the following question arises: are AI and HI two sides of the same coin? Before we can come up with an answer, we need to look at the methodological framework that makes this comparison possible. For Daniel Andler, “functionalism is the framework par excellence within which the question of comparison arises, provided that we call ‘intelligence’ the combined result of cognitive functions”. However, something is almost certainly missing if we are to get as close as possible to human intelligence, situated in time and space. “Historically, it was John Haugeland who developed this idea of a missing ingredient in AI. We often think of consciousness, intentionality, autonomy, emotions or even the body”, Daniel Andler explains.

Consciousness and the associated mental states seem to be missing from AI. For Annabelle Blangero, this missing ingredient is simply a question of technical means: “I come from a school of thought in neuroscience where we consider that consciousness emerges from the constant evaluation of the environment and associated sensory-motor reactions. Based on this principle, reproducing human multimodality in a robot should bring out the same characteristics. Today, the architecture of connectionist systems reproduces fairly closely what happens in the human brain. What’s more, similar measures of activity are used in biological and artificial neural networks”.

Nevertheless, as Daniel Andler points out, “Today, there is no single theory to account for consciousness in humans. The question of its emergence is wide open and the subject of much debate in the scientific-philosophical community”. For Maxime Amblard, the fundamental difference lies in the desire to make sense. “Humans construct explanatory models for what they perceive. We are veritable meaning-making machines.”

The thorny question of intelligence

Despite this well-argued development, the question of bringing AI and HI together remains unanswered. In fact, the problem is primarily conceptual and concerns the way in which we define intelligence.

A classic definition would describe intelligence as the set of abilities that enable us to solve problems. In his recent book, Intelligence artificielle, intelligence humaine: la double énigme, Daniel Andler proposes an alternative, elegant definition: “animals (human or non-human) deploy the ability to adapt to situations. They learn to solve problems that concern them, in time and space. They couldn’t care less about solving general, decontextualised problems”.

This definition, which is open to debate, has the merit of placing intelligence in context and not making it an invariant concept. The mathematician and philosopher also reminds us of the nature of the concept of intelligence. “Intelligence is what we call a dense concept: it is both descriptive and objective, appreciative and subjective. Although in practice we can quickly come to a conclusion about a person’s intelligence in a given situation, in principle it’s always open to debate”.

Putting AI to work for humans

In the end, the issue of comparison seems irrelevant if we are looking for a concrete answer. It is of greater interest if we seek to understand the intellectual path we have travelled, the process. This reflection highlights some crucial questions: what do we want to give AI? To what end? What do we want for the future of our societies?

These are essential questions that revive the ethical, economic, legislative, and social challenges that need to be taken up by the players in the world of AI, by governments, and by citizens the world over. At the end of the day, there is no point in knowing whether AI is or will be like us. The only important question is: what do we want to do with it, and why?

Julien Hernandez
