Neuroscience: our relationship with intelligence

Mind of an AI: from curiosity to autonomy

Agnès Vernet, Science journalist
On February 18th, 2021
4 min reading time
Pierre-Yves Oudeyer
Inria Research Director and head of the FLOWERS team at Inria/Ensta Paris (IP Paris)
Key takeaways
  • Research in artificial intelligence draws on the cognitive sciences, but neurobiology is now also making progress thanks to algorithmic models.
  • Curiosity, referred to as “intrinsic motivation” by psychologists, is a necessary trait for independent learning in children.
  • For Dr. Pierre-Yves Oudeyer, this mechanism can also be applied to machines.
  • As such, his research explores human cognition to improve artificial intelligence… and vice-versa.

How does one measure the intelligence of an artificial intelligence?

It isn’t easy, because the term “artificial intelligence” is used by the general public to refer to objects developed in this research field, such as software equipped with a learning system. In fact, artificial intelligence is not a thing. Rather, it is a field of study that tries to model functions of the human mind, such as memory, reasoning, learning or language. There is therefore no single thing whose intelligence we could measure.

Moreover, the notion of ‘intelligence’ makes no sense in absolute terms. For example, it is impossible to say that an earthworm is more stupid than a human. Instead, each living being has behavioral and morphological characteristics resulting from an evolutionary process shaped by its particular environment. Earthworms can find food in the ground and, in their own ecosystem, human beings have social interactions – linguistic or cultural – with others. An earthworm wouldn’t know what to do in our ecosystem, and a human wouldn’t do any better in the soil.

Technologies, too, are developed in very specific contexts. It cannot be said that smartphone voice-recognition systems are stupid because they don’t understand the meaning of the sentences that they transcribe. They were not trained to that end; it’s not part of their ‘ecosystem’.

The software you mention transcribes and learns. Could it also understand?

Fundamentally, the meaning we associate with a sentence is embodied: it is interpreted based on the sensory and motor experiences of our body in its environment. If a machine does not have access to a body with which to physically interact with our world, it stands no chance of interpreting sentences as we do.

However, we can train language models on large text databases. Machines can then detect statistical patterns and perform astonishing tasks, such as answering a simple question, by predicting the structure of sentences according to a given context. These tools are very useful in industry, for human-machine interfaces, where machines must interpret instructions based on context. To do this, unlike humans, they don’t necessarily need to understand the sentences.
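
To make “detecting statistical patterns” concrete, here is a minimal sketch – a toy bigram model, not a description of any production system – that predicts the next word purely from word-pair counts, without any access to meaning:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the "large text databases" mentioned above.
corpus = "the robot explores the room . the robot learns fast .".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most frequent continuation of `word`."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))    # -> "robot" (seen twice, vs. "room" once)
print(predict_next("robot"))  # -> "explores" or "learns" (tied counts)
```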

In your research you state that, in humans, part of learning is driven by curiosity. Can it be applied to software?

This issue is at the heart of my team’s research. We study the mechanisms of curiosity, or what psychologists call “intrinsic motivation”, which allows living beings to undertake independent learning. We develop algorithmic models of curiosity to highlight the mechanisms involved, such as spontaneous exploration, which plays a fundamental role in the sensory, cognitive and motor development of humans.

We then test our theories with volunteers or machines. In doing so, we discovered that to explore its environment efficiently, a robot must pick the areas where it makes the most progress, meaning those in which the gap between prediction and reality tends to decrease. For example, it is in the robot’s interest to play with an object with which it tends to make progress, rather than one it has already mastered or, on the contrary, one it cannot use at all. We showed that, in theory, this strategy is efficient for robots. Whether humans use this measure of progress to guide the exploration of their surroundings remains an open question.
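
As a rough illustration of this progress measure, here is a simplified sketch – invented names and an invented error model, not the team’s actual code – in which an agent tracks its prediction error per activity and keeps choosing the activity whose error has been shrinking fastest:

```python
class Activity:
    """A sensorimotor activity with a prediction error that practice may reduce."""
    def __init__(self, name, learnability, error=1.0):
        self.name = name
        self.learnability = learnability  # how fast practice shrinks the error
        self.history = [error]            # record of prediction errors

    def practice(self):
        # Practicing reduces prediction error in proportion to learnability;
        # an impossible activity (learnability 0) never improves.
        self.history.append(self.history[-1] * (1.0 - self.learnability))

def learning_progress(activity, window=5):
    """Recent drop in prediction error: the signal the agent maximises."""
    recent = activity.history[-window:]
    return recent[0] - recent[-1]

activities = [
    Activity("already mastered", learnability=0.5, error=0.01),
    Activity("just right",       learnability=0.3),
    Activity("impossible",       learnability=0.0),
]

# Try everything once so each activity has a progress estimate.
for a in activities:
    a.practice()

# Then repeatedly pick the activity showing the most learning progress.
for step in range(30):
    max(activities, key=learning_progress).practice()

for a in activities:
    print(f"{a.name}: practiced {len(a.history) - 1} times, "
          f"final error {a.history[-1]:.3f}")
```

Run as-is, the agent practices the “just right” activity almost exclusively, quickly losing interest in the activity it has already mastered and in the one where no progress is possible – the behavior described above.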

But could this measure of progress explain why humans tend to prefer activities that they are able to learn easily?

Yes. The mechanism of exploration according to progress leads to a “snowball” effect: when we explore an activity, whether initiated randomly or by other contingent factors, we develop knowledge or skills which make similar types of activities easier to learn. This encourages the individual to pursue that course of action; it is also associated with the pleasure response in the brain upon exploring new activities.

This fundamental hypothesis explains the diversity of learning trajectories found in different people. To confirm it, we compared the behavior of adult volunteers with that predicted by our computational model. These analyses showed that learning progress and performance on each task are both measures that humans use to guide their exploration, though each person weighs them differently: the combination of these individual differences with the snowball effect mentioned above supports the idea of a diversity of learning pathways, which explains differences between individuals.

Does this model improve machines?

Our theories can sometimes be implemented in machines to make them more adaptable. But the exploratory behavior of humans is not necessarily the optimal choice. For example, other curiosity mechanisms are better suited to robots destined to explore the ocean floor or the surface of Mars autonomously – if only to prevent, as far as possible, these machines from making dangerous choices.

Can these tools also help humans to learn better?

Yes, there are indeed applications in the field of education. We designed software that creates custom exercise sequences for students in mathematics. The objective is to adapt the series to each child so as to optimise both their learning and their motivation. We know that the latter is an important factor in academic failure: motivational aspects prompt students to persevere and try harder. Using curiosity models, we developed algorithms that interact with each child individually to offer a motivating series of exercises according to the child’s profile.
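
As a hedged sketch of how such a personalised sequence might be chosen – the exercise names and the success model below are invented for illustration; the actual Kidlearn and Adaptiv’Maths algorithms are more elaborate – one can sample the next exercise in proportion to the child’s recent learning progress, bandit-style:

```python
import random

# Hypothetical exercise types, ordered roughly by difficulty (illustrative).
EXERCISES = ["count coins", "make change", "compare prices", "multi-step purchase"]

# Recent outcomes per exercise type: 1 = correct answer, 0 = incorrect.
history = {ex: [] for ex in EXERCISES}

def learning_progress(ex, window=6):
    """Improvement in success rate between the older and newer halves of
    the child's recent answers: a proxy for both learning and motivation."""
    recent = history[ex][-window:]
    if len(recent) < 4:
        return 0.5  # optimistic default so untried exercise types get sampled
    half = len(recent) // 2
    older, newer = recent[:half], recent[half:]
    return max(0.0, sum(newer) / len(newer) - sum(older) / len(older))

def choose_exercise():
    """Bandit-style choice: sample in proportion to learning progress, so the
    child mostly works at the edge of their current competence."""
    weights = [learning_progress(ex) + 0.05 for ex in EXERCISES]  # keep exploring
    return random.choices(EXERCISES, weights=weights, k=1)[0]

# Usage: record each answer, then ask for the next exercise.
history["count coins"].extend([0, 1, 0, 1, 1, 1])
print(choose_exercise())
```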

In a previous project, Kidlearn, we showed that, on average, a greater diversity of students made more progress with our software’s proposals than with those of a teaching expert – a result that includes children with a range of difficulties and abilities. This benefit was associated with a higher degree of intrinsic motivation. We are now working with a consortium of companies in the field of education technology (EdTech) to transfer this approach into a digital educational system intended for large-scale use in primary schools in France (the Adaptiv’Maths project). My colleague Hélène Sauzéon has even shown that this system can facilitate learning for children with developmental disorders such as autism.
