Biomimicry: when science draws inspiration from nature

Algorithms: a biomimetic approach to performance and nuance

Clément Viricel, Doctorate in mathematics and computer science applied to biology and Laurent Pujo-Menjouet, Lecturer and researcher in mathematics applied to biology and medicine at the Université Claude Bernard Lyon 1 and senior lecturer and researcher at the Institut Camille Jordan
On October 25th, 2023 | 4 min reading time
Key takeaways
  • Many algorithms are biomimetic, since they are closely modelled on the way neurons work.
  • Biomimicry underpins the development of many algorithms, such as “genetic” algorithms and convolutional (or recurrent) neural networks.
  • Inspired by human learning, researchers have sought to improve the speed of algorithms by adding an “attention layer” to neural networks.
  • The challenge for the future is to reduce the energy footprint of these innovations.

Biomimetics is no stranger to the rapid progress and breathtaking performance of today’s algorithms. But the IT community is still struggling to integrate the true power of the living world: its energy efficiency.

Biomimetics has been part of the history of algorithmics since its earliest developments. “In 1964, the first neural network, the perceptron, was already biomimetic. It sought to reproduce the electrophysiological properties of neurons, their excitability and ability to transmit information”, explains Clément Viricel, lecturer at Lyon University. Each neuron receives data, evaluates it and produces a result according to the function specified in the algorithm. This process constitutes the “activation” of the artificial neuron, just as a neuron in the brain is activated by nerve impulses. In the perceptron, the neurons were connected in a single layer; it was by multiplying these layers that later networks could process richer flows of information.
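The logic of that activation can be sketched in a few lines of Python. The weights, bias and the AND-gate example below are purely illustrative, not part of the perceptron’s history:

```python
# A minimal sketch of a single artificial neuron (perceptron-style),
# assuming a step activation; all values here are illustrative.

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs, then a step 'activation', mimicking
    a biological neuron firing once its excitation threshold is crossed."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# A neuron wired (by hand) to behave like a logical AND gate:
and_weights, and_bias = [1.0, 1.0], -1.5
print(neuron([1, 1], and_weights, and_bias))  # fires: 1
print(neuron([1, 0], and_weights, and_bias))  # stays silent: 0
```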

Neural networks

From the 1990s onwards, training algorithms adopted these neural networks in an attempt to reproduce the way in which humans learn. “Neural networks are biomimetic because they learn by error, rather like humans or babies. Plasticity can be represented by matrices whose elements are weighted according to success. The coefficients play the role of reinforcement between neurons”, explains Laurent Pujo-Menjouet. Clément Viricel adds, “For example, when learning a language, humans often discover the meaning of a word through context. Semantics play a crucial role. This is what neural networks began to do, by being trained on texts in which a word was missing. Then they were optimised by backpropagation”. In other words, by correcting the weights of the network according to the output results. “But this process is a black box, where the variations in weighting that enable the algorithm to evolve are not visible,” adds Clément Viricel. And we know that it is difficult to trust a process if you don’t understand how it works. These methods are a major headache for underwriters in charge of products that incorporate them, such as autonomous vehicles¹ or diagnostic assistance systems².
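Learning by error can be sketched with the classic perceptron update rule, a single-layer relative of backpropagation (which applies the same correct-the-weights idea layer by layer). The OR-gate data and learning rate below are invented for illustration:

```python
# A minimal sketch of "learning by error": nudge each weight in
# proportion to the mistake, the computational analogue of
# reinforcement between neurons. Toy data; not from the article.

def train_step(weights, bias, inputs, target, rate=0.1):
    # Forward pass: weighted sum + step activation.
    output = 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0
    error = target - output                       # how wrong were we?
    new_w = [w + rate * error * x for w, x in zip(weights, inputs)]
    return new_w, bias + rate * error

# Repeated exposure to examples (here, the OR function) strengthens
# the right connections, like synaptic plasticity:
w, b = [0.0, 0.0], 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
for _ in range(20):
    for x, t in data:
        w, b = train_step(w, b, x, t)
```

After a few passes the weights stop changing: the network has learned the rule from its own errors.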

Biomimicry contributes to the development of a large number of algorithms. These include so-called “genetic” algorithms, which borrow the mechanics of evolution for calculation purposes, and enable the most relevant result to be selected according to several methods (by rank, tournament, adaptation, etc.). Systems such as these have been deployed in the search for optima, but also in the development of games, such as the famous Super Mario, to rank players against one another. There are also convolutional neural networks, inspired by the human visual system. “Their developers wanted to reproduce the way in which the eye analyses an image. It’s a square of neurons, which scans the image to capture the pixels before reconstructing it in its entirety”, explains Clément Viricel. This tool is renowned for having surpassed the expert eye, particularly in the diagnosis of melanoma³. How does it work? “During the training period it extracts characteristics such as ‘tumour shape’ and ‘tumour size’. Then it looks for these characteristics to recognise a particular object”, explains Clément Viricel.
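The selection methods mentioned above can be illustrated with a toy genetic algorithm using tournament selection; the fitness function, population size and mutation rate here are all invented for the sketch:

```python
# A minimal sketch of a "genetic" algorithm with tournament selection.
# Toy goal: evolve a bit-string with as many 1s as possible.
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def fitness(bits):
    return sum(bits)

def tournament(pop, k=3):
    # Pick k candidates at random; the fittest wins the right to reproduce.
    return max(random.sample(pop, k), key=fitness)

def evolve(pop, generations=30, mutation=0.05):
    for _ in range(generations):
        new_pop = []
        for _ in range(len(pop)):
            a, b = tournament(pop), tournament(pop)
            cut = random.randrange(1, len(a))            # crossover point
            child = a[:cut] + b[cut:]
            # Occasional random mutation keeps diversity in the population.
            child = [bit ^ (random.random() < mutation) for bit in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
best = evolve(pop)
```

Rank or adaptation-based selection would simply replace the `tournament` function; the evolve/select/recombine loop stays the same.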

These biomimetic algorithms are applied to all manner of subjects, as exemplified by recurrent neural networks. “They are used to analyse data sequentially or over time. They are widely used for the automatic processing of texts, by taking word order into account. Dense layers are made recurrent so that the network doesn’t forget what it has done before”, explains Clément Viricel. Such networks have been used to build machine translation tools: a first recurrent network “reads” and encodes the text in the original language, then a second recurrent network decodes it into another language, all of which takes time and energy. “They require a lot of energy to train”, admits Clément Viricel.
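The recurrence that lets such a network “remember” word order can be sketched with a single hidden value. Real networks use learned weight matrices over word vectors; the fixed scalar weights below are toy numbers chosen for illustration:

```python
# A minimal sketch of the recurrence at the heart of an RNN: a hidden
# state carried from step to step so the network doesn't forget what
# it has done before. Weights are fixed toy values, not learned.
import math

def rnn_step(x, hidden, w_in=0.5, w_rec=0.9):
    # New memory = squashed mix of the current input and the old memory.
    return math.tanh(w_in * x + w_rec * hidden)

def encode(sequence):
    hidden = 0.0
    for x in sequence:        # words would be vectors in practice
        hidden = rnn_step(x, hidden)
    return hidden             # a summary of the whole sequence, in order

# Order matters, unlike a bag of words:
print(encode([1.0, 0.0, 0.0]) != encode([0.0, 0.0, 1.0]))  # True
```

An encoder/decoder translation system chains two such loops: the final hidden state of the first becomes the starting point of the second.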

Transformers

So, we need to be able to learn faster. Specialists therefore devised a way of reproducing lexical dependency: when a human reads a text, they implicitly know what each pronoun refers to. This makes sentences lighter. “To reproduce this, we had to add an extra layer of neurons, the attention layer. And this is the parameter on which the latest biomimetic evolution has taken place”, explains the specialist. The inventors of these new artificial intelligences titled their paper “Attention is all you need”. In fact, their network consists of just 12 attention layers and an encoder/decoder system. These networks are called “transformers”, and they are the models behind Google’s BERT and BLOOM, from the Hugging Face start-up founded by three Frenchmen. (Chat-)GPT is a direct descendant of the transformers, although it uses only the decoder and no encoder.
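The attention layer can be sketched as scaled dot-product attention, the core operation of the transformer architecture; the tiny two-dimensional “word vectors” below are invented for illustration:

```python
# A minimal sketch of scaled dot-product attention, in pure Python
# for readability. Each query scores every key, the scores become
# weights (softmax), and the values are mixed accordingly.
import math

def attention(queries, keys, values):
    d = len(keys[0])
    out = []
    for q in queries:
        # Score each key against the query (dot product, scaled)...
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # ...turn scores into positive weights that sum to 1 (softmax)...
        exps = [math.exp(s) for s in scores]
        weights = [e / sum(exps) for e in exps]
        # ...and mix the values: the query "attends" most to positions
        # whose keys resemble it, e.g. the noun a pronoun refers to.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# A query aligned with the first key draws mostly on the first value:
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, k, v))
```

Because every position attends to every other in one shot, there is no step-by-step loop to wait for, which is why transformers train so much faster than recurrent networks.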

This whole story is a good example of how biomimetics has fed into algorithmic innovation while forgetting one of the essential characteristics of living organisms: energy efficiency. Training GPT-3, for example, required 1,287 MWh and emitted 552 tonnes of CO₂⁴. “Until now, developers haven’t been at all interested in the energy footprint of their networks”, admits Clément Viricel. “It’s a skills problem. The people who design the algorithms are not the same people who build the physical components. We forget the machine aspect. Recent tools consume an enormous amount of power… and the next systems, TPU or HPU, won’t be any more environmentally friendly,” explains the specialist.

The change could come from the next generation of programmers. “We are seeing the emergence of a movement in the community to address this issue. On the one hand, this is due to the need to optimise energy consumption, but there are also ethical reasons. For the moment, the improvements are purely mechanical, based on energy transfer”, explains Clément Viricel. But other avenues are emerging, such as zero-shot learning algorithms: “They work without training, which saves on the cost of learning”, adds the specialist. It remains to be seen whether their performance can compete with that of their predecessors to produce totally biomimetic systems.

Agnès Vernet
¹ https://www.polytechnique-insights.com/tribunes/science/des-algorithmes-pour-guider-les-taxis-volants/
² https://catalyst.nejm.org/doi/full/10.1056/CAT.21.0242
³ https://www.nature.com/articles/nature21056
⁴ https://arxiv.org/ftp/arxiv/papers/2204/2204.05149.pdf
