Neuroscience

Understanding short-term memory through neuronal plasticity

David Clark
PhD Student in Neurobiology and Behavior at Columbia University
Key takeaways
  • Synapses, not neurons, play the main role in working memory.
  • To simplify the analysis of neural networks, early studies considered neurons to be ‘fixed’, thereby obscuring synaptic plasticity.
  • Researchers at Columbia University updated the theory by including synaptic and neuronal dynamics.
  • They discovered that synaptic dynamics can modulate the overall behaviour of neural networks, speeding up or slowing down neuronal activity.
  • A new behaviour, called ‘frozen chaos’, was identified, where synapses create fixed patterns of neuronal activity, potentially crucial for working memory.
  • There is still room for improvement in this model: neuroscientists now want to incorporate certain biological properties of the brain to make it more realistic.

What role do neurons and synapses play in working memory? This is a question that neuroscientists have long pondered. Until now, it was thought that neuronal activity dominated, with synapses only involved in the slower processes of learning and memory. But researchers at Columbia University have now developed a new theoretical framework that predicts that synapses, rather than neurons, are more important. Their new model might lead us to an alternative mechanism for working memory in the brain, they say.

The human brain is made up of around 100 billion neurons. Each neuron receives electrical signals from other neurons via thousands of tiny connections called synapses. When the sum of the signals emitted by the synapses exceeds a certain threshold, a neuron “fires” by sending a series of voltage spikes to a large number of other neurons. Neurons are therefore “excitable”: below a certain input threshold, the output of the system is very small and linear, but above the threshold it becomes large and non-linear.
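This threshold behaviour can be pictured with a toy rate-model neuron. The function below is a deliberately simplified sketch: the function name, the 1.0 threshold, and the small linear sub-threshold response are illustrative choices, not part of any real neuron model.

```python
import numpy as np

def fire(inputs, weights, threshold=1.0):
    """Toy point-neuron: sum the weighted synaptic inputs and respond
    strongly and nonlinearly only once the sum exceeds the threshold."""
    drive = np.dot(weights, inputs)       # summed synaptic input
    if drive <= threshold:
        return 0.01 * drive               # sub-threshold: small, linear output
    return np.tanh(drive - threshold)     # supra-threshold: large, saturating output

weights = np.ones(3)
weak = fire(np.array([0.4, 0.3, 0.2]), weights)    # total drive 0.9: stays quiet
strong = fire(np.array([2.0, 2.0, 2.0]), weights)  # total drive 6.0: the neuron fires
```

The same three inputs produce almost nothing below threshold and a near-saturated response above it, which is the excitability the paragraph describes.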

The strength of interactions between neurons can also change over time. This process, known as synaptic plasticity, is thought to play a crucial role in learning.
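Synaptic plasticity is often illustrated with a Hebbian rule, in which a connection strengthens when the neurons on both sides are active together. The rule and learning rate below are a generic textbook sketch, not the plasticity model from the study:

```python
import numpy as np

def hebbian_step(W, pre, post, lr=0.01):
    """Strengthen W[i, j] in proportion to the joint activity of
    postsynaptic neuron i and presynaptic neuron j."""
    return W + lr * np.outer(post, pre)

W = np.zeros((2, 2))
pre = np.array([1.0, 0.0])    # presynaptic activity: only neuron 0 active
post = np.array([1.0, 0.0])   # postsynaptic activity: only neuron 0 active
W = hebbian_step(W, pre, post)
# only the synapse between the two co-active neurons has strengthened
```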

With and without plasticity

To simplify things, early studies in this field considered neuronal networks to be non-plastic. They assumed that synaptic connectivity was fixed, and researchers analysed how this connectivity shaped the collective activity of neurons. Although not realistic, this approach has enabled us to understand the basic principles of these networks and how they function.

David Clark, a doctoral student in neurobiology and behaviour at Columbia University, and Larry Abbott, his thesis supervisor, have now extended this model to plastic synapses. This makes the system more complex, and more realistic, because neuronal activity can now dynamically shape the synaptic connectivity between neurons.

The researchers used a mathematical tool known as dynamic mean-field theory to reduce the “high-dimensional” network equations of the original model to a “low-dimensional” statistical description. In short, they modified the theory to include both synaptic and neuronal dynamics. This allowed them to develop a simpler model that incorporates many of the important factors involved in plastic neural networks. “The main challenge was to capture all the dynamics of neurons and synapses while maintaining an analytically solvable model,” explains David Clark.

Synaptic dynamics become important

The researchers found that when synaptic dynamics and neuronal dynamics occur on a similar time scale, synaptic dynamics become important in shaping the overall behaviour of a neural network. Their analyses also showed that synaptic dynamics can speed up or slow down neuronal dynamics, and therefore reinforce or suppress the chaotic activity of neurons.

Above all, they discovered a new type of behaviour that appears when synapses generate fixed patterns of neuronal activity in networks. These patterns appear when plasticity is momentarily deactivated, which has the effect of “freezing” the states of the neurons. This “frozen chaos”, as the researchers call it, could help the brain to store information, much as working memory does.
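The freezing idea can be caricatured in code: run a coupled neuron-synapse network, then switch the plasticity off and ask how much the neuronal state still moves under the now-fixed weights. Everything below (the network, the plasticity rule, the parameters) is an illustrative assumption; in this toy the frozen state may simply be quiescent, whereas the paper's frozen chaos refers to non-trivial fixed patterns sculpted by the synapses.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, eps = 100, 0.1, 0.05
W = 2.0 * rng.standard_normal((N, N)) / np.sqrt(N)  # random initial connectivity
x = rng.standard_normal(N)                          # neuronal states

def step(x, W, plastic):
    r = np.tanh(x)
    x_new = x + dt * (-x + W @ r)                   # neuronal dynamics
    if plastic:                                     # synapses follow joint activity
        W = W + dt * eps * (np.outer(r, r) / N - W)
    return x_new, W

for _ in range(2000):                               # plasticity on: weights are sculpted
    x, W = step(x, W, plastic=True)
for _ in range(2000):                               # plasticity off: weights frozen
    x, W = step(x, W, plastic=False)
drift = np.abs(step(x, W, plastic=False)[0] - x).max()  # residual movement of the state
```

With plasticity deactivated, the weights stop changing and the activity settles toward a pattern fixed by those weights, so the per-step drift of the state shrinks.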


“This research topic came about when Larry Abbott raised the idea, while chatting in his office, that dynamic synapses play just as important a role in neuronal computation as neurons themselves,” explains David Clark. “I found this idea very interesting, because it flips the typical view of neurons as the dynamic units, with synapses only being involved in the slower learning and memory processes. The scientific challenge in our study was to translate this intuition into equations and results.”

“The new model provides a possible new mechanism for working memory,” he adds. “More generally, we now have a solvable model of coupled neuronal and synaptic dynamics that could be extended, for example, to modelling how short-term memory is consolidated into long-term memory.”

David Clark and Larry Abbott now hope to make their model more realistic by incorporating certain biological properties of the brain, including the fact that neurons communicate via discrete voltage spikes. Other important features, such as the structured patterns of connectivity between neurons, will also have to be taken into account, they add.

Isabelle Dumé

Reference: D. G. Clark and L. F. Abbott, “Theory of coupled neuronal-synaptic dynamics,” Phys. Rev. X 14, 021001 (2024).
