Biomimicry: when science draws inspiration from nature

Algorithms: a biomimetic approach to performance and nuance

with Clément Viricel, PhD in mathematics and computer science applied to biology, and Laurent Pujo-Menjouet, lecturer and researcher in mathematics applied to biology and medicine at the Université Claude Bernard Lyon 1 and senior lecturer and researcher at the Institut Camille Jordan
On October 25th, 2023 | 4 min reading time
Key takeaways
  • Many algorithms are biomimetic systems, since they are closely modelled on the way neurons work.
  • Biomimicry is used in the development of many algorithms, such as “genetic” algorithms and convolutional (or recurrent) neural networks.
  • Inspired by humans, researchers have sought to improve the speed of algorithms by adding an “attention layer” to neural networks.
  • The challenge for the future is to reduce the energy footprint of these innovations.

Biomimetics is no stranger to the rapid progress and breathtaking performance of today’s algorithms. But the IT community is still struggling to integrate the true power of the living world: its energy efficiency.

Biomimetics has been part of the history of algorithmics since its earliest developments. “In 1958, the first neural network, the perceptron, was already biomimetic. It sought to reproduce the electrophysiological properties of neurons, their excitability and ability to transmit information”, explains Clément Viricel, lecturer at Lyon University. Each neuron receives data, evaluates it and produces a result according to the function specified in the algorithm. This process constitutes the “activation” of the artificial neuron, just as a neuron in the brain is activated by nerve impulses. In the perceptron, the neurons were connected in a single layer; it was by multiplying the layers of neurons that later networks came to process richer flows of information.
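To make the idea concrete, here is a minimal sketch of a perceptron in Python. It illustrates the principle described above, not Rosenblatt’s historical implementation; the function names and the toy AND task are our own assumptions.

```python
import numpy as np

def activate(x, w, b):
    """Step activation: the artificial neuron 'fires' (returns 1)
    only if its weighted input exceeds the threshold."""
    return 1 if np.dot(w, x) + b > 0 else 0

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Single-layer perceptron rule: reinforce or weaken each
    weight in proportion to the output error."""
    w, b = np.zeros(len(samples[0])), 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            error = target - activate(x, w, b)
            w += lr * error * np.asarray(x)
            b += lr * error
    return w, b

# Toy usage: learn the logical AND function.
data, labels = [(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1]
w, b = train_perceptron(data, labels)
print([activate(x, w, b) for x in data])  # -> [0, 0, 0, 1]
```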

Neural networks

From the 1990s onwards, training algorithms adopted these neural networks in an attempt to reproduce the way in which humans learn. “Neural networks are biomimetic because they learn by error, rather like humans or babies. Plasticity can be represented by matrices whose elements are weighted according to success. The coefficients play the role of reinforcement between neurons”, explains Laurent Pujo-Menjouet. Clément Viricel adds, “For example, when learning a language, humans often discover the meaning of a word through context. Semantics play a crucial role. This is what neural networks began to do, by being trained with texts in which a word was missing. Then they were optimised by backpropagation”. In other words, by correcting the weights of the input neurons according to the output results. “But this process is a black box, where the variations in weighting that enable the algorithm to evolve are not visible,” adds Clément Viricel. And we know that it is difficult to trust a process if you don’t understand how it works. These methods are a major headache for underwriters in charge of products that incorporate them, such as autonomous vehicles1 or diagnostic assistance systems2.
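What backpropagation does can be shown in a few lines. Below is a hedged sketch, in Python with NumPy, of a tiny two-layer network trained on the XOR function: the weight matrices play the role of plasticity, and each is corrected in proportion to the error flowing back from the output. All dimensions and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)    # input -> hidden weights
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)    # hidden -> output weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass: each layer activates on the previous layer's output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through the layers
    # and nudge every weight in the direction that reduces it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0] for most seeds
```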

Biomimicry contributes to the development of a large number of algorithms. These include so-called “genetic” algorithms, which draw on phylogenetic trees for their calculations and select the most relevant result according to several methods (by rank, tournament, adaptation, etc.). Systems such as these have been deployed in the search for optima, but also in the development of games, such as the famous Super Mario, to rank players against one another. There are also convolutional neural networks, inspired by the human visual network. “Its developers wanted to reproduce the way in which the eye analyses an image. It’s a square of neurons, which scans the image to capture the pixels before reconstructing it in its entirety”, explains Clément Viricel. This tool is renowned for having surpassed the expert eye, particularly in the diagnosis of melanoma3. How does it work? “During the training period it extracts characteristics such as ‘tumour shape’ and ‘tumour size’. Then it looks for these characteristics to recognise a particular object”, explains Clément Viricel.
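As a rough illustration of the selection methods mentioned above, here is a minimal “genetic” algorithm in Python with tournament selection. The toy fitness function (maximising the number of 1-bits in a string) and all parameters are assumptions made for the example, not any specific library’s API.

```python
import random

def fitness(genome):
    """Toy objective: the more 1-bits, the fitter the individual."""
    return sum(genome)

def tournament(population, k=3):
    """Tournament selection: draw k individuals at random, keep the fittest."""
    return max(random.sample(population, k), key=fitness)

def evolve(pop_size=30, genome_len=20, generations=50, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            a, b = tournament(pop), tournament(pop)              # selection
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]                            # crossover
            child = [g ^ (random.random() < p_mut) for g in child]  # mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), best)  # typically close to the all-ones optimum
```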

These biomimetic algorithms are applied to all kinds of subjects, as exemplified by recurrent neural networks. “They are used to analyse data sequentially or over time. They are widely used for the automatic processing of texts, taking word order into account. Dense layers are made recurrent so that the network doesn’t forget what it has done before”, explains Clément Viricel. Such networks have been used to build machine translation tools: a first recurrent network “reads” and encodes the text in the original language, and a second recurrent network decodes it into another language, all of which takes time and energy. “They require a lot of energy to train”, admits Clément Viricel.
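The recurrence itself fits in a few lines. Below is a hedged sketch of a recurrent encoding step in Python with NumPy: the hidden state carried from one word to the next is what lets the network take word order into account. Dimensions and weights are illustrative assumptions, not a real translation model.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hidden = 4, 8
W_x = rng.normal(scale=0.5, size=(d_in, d_hidden))      # input -> hidden
W_h = rng.normal(scale=0.5, size=(d_hidden, d_hidden))  # hidden -> hidden (the recurrence)

def rnn_encode(sequence):
    """Fold a sequence of word vectors into one state, in order:
    each new state depends on the previous one, so earlier words
    are not forgotten."""
    h = np.zeros(d_hidden)
    for x in sequence:
        h = np.tanh(x @ W_x + h @ W_h)
    return h

sentence = [rng.normal(size=d_in) for _ in range(5)]  # five toy "word" vectors
state = rnn_encode(sentence)
# In a translation system, a second recurrent network would now decode
# this state, word by word, into the target language.
print(state.round(2))
```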

Transformers

So, networks needed to learn faster. Specialists therefore devised a way of reproducing lexical dependency: when a human reads a text, they implicitly know what each pronoun refers to, which makes sentences lighter. “To reproduce this, we had to add an extra layer of neurons, the attention layer. And this is the parameter on which the latest biomimetic evolution has taken place”, explains the specialist. The inventors of these new artificial intelligences titled their paper “Attention is all you need”. Their network consists of just 12 attention layers and an encoder/decoder system. These networks are called “transformers”; they are the models behind Google’s BERT and behind Bloom, from Hugging Face, the start-up founded by three Frenchmen. (Chat-)GPT is a direct descendant of the transformers, although it uses only the decoder, with no encoder.
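The attention layer itself reduces to a simple operation. Here is a hedged sketch of scaled dot-product attention, the mechanism at the heart of “Attention is all you need”: every word scores its relevance to every other word, which is how a pronoun can “attend” to the noun it refers to. The toy dimensions are assumptions for illustration.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: score every position against
    every other, softmax the scores, and blend the values."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ V

rng = np.random.default_rng(2)
seq_len, d = 6, 16                 # six toy "words", 16-dimensional vectors
x = rng.normal(size=(seq_len, d))
out = attention(x, x, x)           # self-attention: the sentence looks at itself
print(out.shape)                   # (6, 16)
```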

This whole story is a good example of how biomimetics has fed into algorithmic innovation, while forgetting one of the essential characteristics of living organisms: energy efficiency. For example, training GPT‑3 required 1,287 MWh and emitted 552 tonnes of CO24. “Until now, developers haven’t been at all interested in the energy footprint of their networks”, admits Clément Viricel. “It’s a skills problem. The people who design the algorithms are not the same people who build the physical components. We forget the machine aspect. Recent tools consume an enormous amount of power… and the next systems, TPU or HPU, won’t be any more environmentally friendly,” explains the specialist.

The change could come from the next generation of programmers. “We are seeing the emergence of a movement in the community to address this issue. On the one hand, this is due to the need to optimise energy consumption, but also for ethical reasons. For the moment, the improvements are purely mechanical, based on energy transfer”, explains Clément Viricel. But other avenues are emerging, such as zero-shot learning algorithms: “They work without training, which saves on the cost of learning”, adds the specialist. It remains to be seen whether their performance can compete with that of their predecessors to produce totally biomimetic systems.

Agnès Vernet
1. https://www.polytechnique-insights.com/tribunes/science/des-algorithmes-pour-guider-les-taxis-volants/
2. https://catalyst.nejm.org/doi/full/10.1056/CAT.21.0242
3. https://www.nature.com/articles/nature21056
4. https://arxiv.org/ftp/arxiv/papers/2204/2204.05149.pdf
