What are the next challenges for AI?

The future of brain-machine synchronisation

with Hamilton Mann, Group Vice President of Digital Marketing and Digital Transformation at Thales and Senior Lecturer at INSEAD, Cornelia C. Walther, Senior Visiting Scientist at Wharton Initiative for Neuroscience (WiN) and Michael Platt, Director of the Wharton Neuroscience Initiative and a Professor of Marketing, Neuroscience, and Psychology at the University of Pennsylvania
On October 30th, 2024 | 9 min reading time
Key takeaways
  • The evolution of AI represents a breakthrough in the relationship between humans and machines.
  • AI is now capable of generating human-like responses and adapting to the context of its interactions.
  • Advances such as brain-computer interfaces (BCIs) make it possible for AI to connect with human thoughts and emotions.
  • Neuroscience can also guide the development of AI, for example through neuromorphic computing.
  • Despite its promise, the human-machine relationship raises major ethical issues, notably concerning data privacy and the preservation of human autonomy.

The remarkable evolution of Artificial Intelligence (AI) systems represents a paradigm shift in the relationship between humans and machines. This transformation is evident in the seamless interactions facilitated by these advanced systems, where adaptability emerges as a defining characteristic, resonating with the fundamental human capacity to learn from experience and predict behaviour.

AI mimics human learning

One facet of AI that aligns closely with human cognitive processes is Reinforcement Learning (RL). RL mimics the human learning paradigm by allowing AI systems to learn through interaction with an environment, receiving feedback in the form of rewards or penalties. Large Language Models (LLMs), by contrast, play a crucial role in pattern recognition, capturing the intricate nuances of human language and behaviour. These models, such as ChatGPT and BERT, excel at understanding contextual information, grasping the subtleties of language, and predicting user intent. Leveraging vast datasets, LLMs acquire a comprehensive understanding of linguistic patterns, enabling them to generate human-like responses and adapt to aspects of user behaviour, sometimes with remarkable accuracy.

The synergy between RL and LLMs creates a powerful predictor of human behaviour. RL contributes the ability to learn from interactions and adapt, while LLMs enhance prediction capabilities through pattern recognition. AI systems based on RL can thus display a form of behavioural synchrony. At its core, RL enables AI systems to learn optimal sequences of actions in interactive environments, gradually refining a policy that maximises reward. Analogous to a child touching a hot surface and learning to avoid it, these AI systems adapt based on the positive or negative feedback they receive.
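To make this trial-and-error loop concrete, here is a minimal sketch of tabular Q-learning on a toy corridor environment. The environment, reward scheme, and hyperparameters are illustrative assumptions, not drawn from any system discussed in this article.

```python
import random

# A minimal tabular Q-learning sketch on a 1-D corridor: the agent starts
# at cell 0 and earns a reward only when it reaches the goal cell.
# All environment details and hyperparameters are illustrative.
N_STATES = 6          # cells 0..5; cell 5 is the goal
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply the action; reward 1.0 at the goal, 0 elsewhere."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        # Temporal-difference update: nudge Q toward reward + discounted future value.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy points every cell toward the goal.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```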

AI replicates human interactions

AI agents using deep reinforcement learning, such as Google DeepMind’s AlphaZero, learn and improve by playing millions of games against themselves, thereby refining their strategies over time. This self-improvement process in AI involves an agent iteratively learning from its own actions and outcomes. Similarly, in human interactions, brain synchrony occurs when individuals engage in cooperative tasks, leading to aligned patterns of brain activity that facilitate shared understanding and collaboration. Unlike AI, humans achieve this synchrony through interaction with others rather than with themselves.
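One minimal way to see self-improvement through self-play is fictitious play, where an agent repeatedly best-responds to a frozen record of its own past behaviour. The rock-paper-scissors setting below is a deliberately simple stand-in for the idea and bears no relation to AlphaZero’s actual architecture.

```python
from collections import Counter

# A minimal self-play sketch: an agent refines its rock-paper-scissors
# strategy by best-responding to a snapshot of its own play history
# (fictitious self-play). Purely illustrative.
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

history = Counter({m: 1 for m in MOVES})  # record of the agent's own past moves

def best_response(counts):
    """Play the move that beats the opponent's empirically most common move."""
    likely = counts.most_common(1)[0][0]
    return next(m for m in MOVES if BEATS[m] == likely)

for _ in range(10_000):
    past_self = history.copy()        # frozen snapshot of the current self
    move = best_response(past_self)   # best response to one's own history
    history[move] += 1

# The empirical frequencies drift toward the uniform equilibrium (~1/3 each):
# the agent has "learned" that no pure strategy survives against itself.
total = sum(history.values())
print({m: round(history[m] / total, 3) for m in MOVES})
```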

What’s more, AI systems can also learn from interactions with humans. Just as human brain synchrony enhances cooperation and understanding, AI systems can improve and align their responses through extensive iterative learning from human interactions. While AI systems do not literally share knowledge as human brains do, they become repositories of data inherited from these interactions, which corresponds to a form of knowledge. This process of learning from vast datasets, including human interactions, can be seen as a form of ‘collective memory’. This analogy highlights the potential for AI systems to evolve while being influenced by humans, while also influencing humans through their use, indicating a form of ‘computational synchrony’ that could be seen as an analogue to human brain synchrony.

In addition, AI systems equipped with social-cue recognition are being designed to detect and respond to human emotions. These ‘Affective Computing’ systems, a term coined by Rosalind Picard in 1995 [1], can interpret human facial expressions, voice modulations, and even text to gauge emotions and then respond accordingly. An AI assistant that can detect user frustration in real time and adjust its responses or assistance strategy is a rudimentary form of behavioural synchronisation based on immediate feedback.

For instance, affective computing encompasses technologies like emotion recognition software that analyses facial expressions and voice tone to determine a person’s emotional state. Real-time sentiment analysis of text and voice allows AI to adjust its interactions to be more empathetic and effective. This capability is increasingly used in customer service chatbots and virtual assistants to improve user experience by making interactions feel more natural and responsive.
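The sketch below illustrates the general idea: a reply’s tone is adjusted from a sentiment score. The keyword lexicon here is a crude stand-in for a trained sentiment model, and all names and thresholds are hypothetical choices, not any vendor’s actual pipeline.

```python
# A toy sketch of affect-aware response adjustment, assuming a sentiment
# score in [-1, 1] is available (here from a trivial keyword heuristic;
# production systems would use a trained sentiment model).
NEGATIVE_WORDS = {"frustrated", "angry", "useless", "broken", "annoying"}
POSITIVE_WORDS = {"great", "thanks", "love", "perfect", "helpful"}

def sentiment(text: str) -> float:
    """Crude lexicon-based polarity score, standing in for a real model."""
    words = set(text.lower().split())
    score = len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)
    return max(-1.0, min(1.0, score / 3.0))

def respond(user_text: str, answer: str) -> str:
    """Wrap the same factual answer in a tone matched to the user's affect."""
    s = sentiment(user_text)
    if s < -0.2:   # user sounds frustrated: acknowledge, then de-escalate
        return f"I'm sorry this has been frustrating. Let's fix it: {answer}"
    if s > 0.2:    # user sounds pleased: keep the tone light
        return f"Glad it's going well! {answer}"
    return answer  # neutral: answer plainly

print(respond("This app is broken and I'm frustrated", "restart with --safe-mode"))
```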

Just as humans adjust their behaviour in response to social cues, adaptive AI systems modify their actions based on user input, potentially leading to a form of ‘synchronisation’ over time. The social competence of such an AI system could be assessed by adapting tools like the Social Responsiveness Scale (SRS), a well-validated psychiatric instrument that measures how adept an individual is at modifying their behaviour to fit the behaviour and disposition of a social partner. The SRS serves as a proxy for ‘theory of mind’: the ability to attribute mental states, such as beliefs, intents, desires, emotions, and knowledge, to oneself and to others.

Moving towards resonance

Brain-Computer Interfaces (BCIs) have ushered in a transformative era in which thoughts can be translated into digital commands, restoring human communication. Companies like Neuralink are making strides in developing interfaces that enable paralysed individuals to control devices directly with their thoughts. By connecting direct recordings of brain activity with AI systems, researchers enabled an individual to speak at normal conversational speed after being mute for more than a decade following a stroke. AI systems can also be used to decode not only what an individual is reading but what they are thinking, based on non-invasive measures of brain activity using functional MRI.

Based on these advances, it is not far-fetched to imagine a future scenario in which a design professional uses a non-invasive BCI (e.g., wearable brainwave monitors such as Cogwear, Emotiv, or Muse) to communicate with AI design software. The software, recognising the designer’s neural patterns associated with creativity or dissatisfaction, could instantaneously adjust its design proposals, achieving a level of synchrony previously thought to be the realm of science fiction. This technological frontier holds the promise of a distinctive form of synchrony, where the interplay between the human brain and AI transcends mere command interpretation, opening up a future in which AI resonates with human thoughts and emotions.
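A rough sense of the signal chain such a wearable might rely on is sketched below: estimate spectral band power from an EEG trace and map it to a coarse mental-state label. The synthetic signal, band edges, and decision rule are all assumptions for illustration; none of this reflects any vendor’s actual processing.

```python
import numpy as np

# A minimal sketch of an EEG band-power pipeline: synthesise a 10-s signal
# with a strong alpha (10 Hz) and weak gamma (40 Hz) component, then label
# the "state" from relative band power. All values are illustrative.
FS = 256                                   # sampling rate (Hz)
t = np.arange(0, 10, 1 / FS)
eeg = (np.sin(2 * np.pi * 10 * t)          # alpha component
       + 0.3 * np.sin(2 * np.pi * 40 * t)  # gamma component
       + 0.5 * np.random.randn(t.size))    # measurement noise

def band_power(signal, fs, lo, hi):
    """Integrate the periodogram between lo and hi Hz."""
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    return psd[(freqs >= lo) & (freqs < hi)].sum()

alpha = band_power(eeg, FS, 8, 12)
gamma = band_power(eeg, FS, 30, 50)
# A crude proxy: relatively high alpha is often read as a relaxed/idling state.
state = "relaxed" if alpha > gamma else "engaged"
print(f"alpha={alpha:.1f}, gamma={gamma:.1f} -> {state}")
```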

Crucially, the resonance envisioned here transcends the behavioural domain to encompass communication as well. As BCIs evolve, the potential for outward expression becomes pivotal. Beyond mere command execution, the integration of facial cues, tone of voice, and other non-verbal signals into AI’s responses amplifies the channels for resonance. This expansion into multimodal communication may enrich synchrony by capturing elements of the holistic nature of human expression, creating a more immersive and natural interaction.

However, the concept of resonance also presents the challenge of navigating the uncanny valley, a phenomenon whereby humanoid entities that closely resemble humans provoke discomfort. Striking the right balance is paramount: the AI’s responsiveness must align authentically with human expressions without entering the discomfiting realm of the uncanny valley. The potential of BCIs to foster synchrony between the human brain and AI introduces promising yet challenging prospects for human-computer collaboration.

Turning to neuroscience

Neuroscience not only illuminates the basis of biological intelligence but may also guide the development of artificial intelligence [2]. Evolutionary constraints such as space and communication efficiency have shaped the emergence of efficient systems in nature. Embedding similar constraints in AI systems, so that artificial architectures evolve organically towards efficiency and environmental sustainability, is the focus of research in so-called “neuromorphic computing”.

For example, oscillatory neural activity appears to boost communication between distant brain areas. The brain employs a theta-gamma rhythm to package and transmit information, similar to a postal service, thereby enhancing efficient data transmission and retrieval [3]. This interplay has been likened to an advanced data transmission system, where low-frequency alpha and beta brain waves suppress neural activity associated with predictable stimuli, allowing neurons in sensory regions to highlight unexpected stimuli via higher-frequency gamma waves. Bastos et al. [4] found that inhibitory predictions carried by alpha/beta waves typically flow backward through deeper cortical layers, while excitatory gamma waves conveying information about novel stimuli propagate forward through superficial layers.
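The nesting of a fast rhythm inside a slow one can be sketched in a few lines: a toy signal in which theta phase modulates gamma amplitude, so that gamma ‘packets’ ride on each theta cycle. The frequencies and modulation depth below are illustrative choices, not physiological measurements.

```python
import numpy as np

# A toy simulation of theta-gamma coupling: a slow theta wave (~6 Hz) whose
# phase modulates the amplitude of a fast gamma wave (~40 Hz), so gamma
# bursts ride on each theta cycle. All constants are illustrative.
FS = 1000
t = np.arange(0, 2, 1 / FS)

theta = np.sin(2 * np.pi * 6 * t)               # slow carrier rhythm
envelope = 0.5 * (1 + theta)                    # gamma amplitude follows theta phase
gamma = envelope * np.sin(2 * np.pi * 40 * t)   # nested fast oscillation
signal = theta + gamma

# Each theta cycle now carries one burst of gamma activity: the "package"
# in the postal-service analogy. Count bursts per second as a sanity check.
bursts = np.sum(np.diff((envelope > 0.9).astype(int)) == 1) / t[-1]
print(f"~{bursts:.1f} gamma packets per second (close to the theta frequency)")
```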


In the mammalian brain, sharp wave ripples (SPW-Rs) exert widespread excitatory influence throughout the cortex and multiple subcortical nuclei [5]. Within these SPW-Rs, neuronal spiking is meticulously orchestrated both temporally and spatially by interneurons, facilitating the condensed reactivation of segments from waking neuronal sequences [6]. This orchestrated activity aids the transmission of compressed hippocampal representations to distributed circuits, thereby reinforcing the process of memory consolidation [7].

Recent AI experiments, particularly those involving OpenAI’s GPT-4, unveil intriguing parallels with evolutionary learning. Unlike traditional task-oriented training, GPT-4 learns from extensive datasets, refining its responses based on accumulated ‘experiences’; moreover, pattern recognition by GPTs parallels pattern recognition by layers of neurons in the brain. This approach mirrors the adaptability observed in natural evolution, where organisms refine their behaviours over time to better resonate with their environment.

From brain waves to AI frequencies

Drawing inspiration from the architecture of the brain, neural networks in AI are constructed with nodes organised in layers that respond to inputs and then generate outputs. In human neural-synchrony research, investigating the role of oscillations has proven to be a pivotal area of interest, with high-frequency oscillatory activity standing out for its ability to facilitate communication between distant brain areas. Particularly intriguing in this context is the theta-gamma neural code introduced above: the brain ‘packages’ and ‘transmits’ information much as a postal service meticulously wraps parcels for efficient delivery, with specific rhythms, akin to a coordinated dance, orchestrating the streamlined transmission of information.
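For readers unfamiliar with that layered organisation, here is a minimal forward pass through a two-layer network. The layer sizes, random weights, and activation function are arbitrary illustrative choices, and no training step is shown.

```python
import numpy as np

# A minimal two-layer feedforward network sketch: nodes organised in layers
# that map inputs to outputs, as described above. Weights are random; only
# the layered forward pass is demonstrated.
rng = np.random.default_rng(0)

def relu(x):
    """Simple nonlinearity applied at each hidden node."""
    return np.maximum(0.0, x)

W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # input layer (3) -> hidden (4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # hidden (4) -> output (2)

def forward(x):
    """Propagate an input through the layers: each node weighs its inputs,
    applies a nonlinearity, and passes the result onward."""
    h = relu(W1 @ x + b1)
    return W2 @ h + b2

print(forward(np.array([0.5, -1.0, 2.0])))
```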

This perspective aligns with the concept of “neuromorphic computing”, where AI architecture is based on neural circuitry. The key advantage of neuromorphic computing lies in its computational efficiency, addressing the significant energy consumption challenges faced by traditional AI models. The training of large AI models, such as those used in natural language processing or image recognition, can consume an exorbitant amount of energy. For instance, training a single AI model can emit as much carbon dioxide as five cars over their entire lifespan [8]. Moreover, researchers have observed that the amount of computation used to train state-of-the-art deep learning models has been doubling approximately every 3.4 months, far outpacing improvements in computational efficiency [9].

Neuromorphic computing offers a promising alternative. By mimicking the architecture of the human brain, neuromorphic systems aim to achieve higher computational efficiency and lower energy consumption compared to conventional AI architectures [10]. For example, IBM’s TrueNorth neuromorphic chip has demonstrated energy efficiency orders of magnitude better than that of traditional CPUs and GPUs [11]. Additionally, neuromorphic computing architectures are inherently suited to low-power, real-time processing tasks, making them ideal for applications like edge computing and autonomous systems, further contributing to energy savings and environmental sustainability.
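The energy argument becomes tangible with the basic unit of many neuromorphic designs, the spiking neuron, which stays silent (and therefore cheap) unless its input drives it over threshold. The leaky integrate-and-fire sketch below uses arbitrary constants and is a teaching toy, not a model of TrueNorth or SpiNNaker hardware.

```python
# A sketch of a leaky integrate-and-fire (LIF) neuron: it only emits (and so
# only spends energy on) a spike when its membrane potential crosses the
# threshold, instead of computing dense activations at every step.
# All constants are illustrative assumptions.
DT, TAU = 1e-3, 20e-3        # time step and membrane time constant (s)
V_REST, V_THRESH = 0.0, 1.0  # resting potential and firing threshold

def simulate(current, steps=1000):
    """Integrate input current over 1 s; spike and reset on threshold crossing."""
    v, spikes = V_REST, []
    for step in range(steps):
        v += DT / TAU * (-(v - V_REST) + current)  # leaky integration
        if v >= V_THRESH:
            spikes.append(step * DT)
            v = V_REST                             # reset after the spike
    return spikes

for i in (0.8, 1.5, 3.0):
    # Sub-threshold drive yields no spikes at all; firing rate grows with input.
    print(f"input {i}: {len(simulate(i))} spikes/s")
```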

Implications for society

In the realm of training and skill development, synchronised AI has the potential to personalise learning experiences based on an employee’s unique learning curve, facilitating faster and more effective skill acquisition. From a customer engagement standpoint, synchronised AI interfaces might more precisely understand and, in some cases, anticipate user expectations based on advanced behavioural patterns.

For operational efficiency, especially in sectors like manufacturing or logistics, AI systems working in coordination with each other can optimise processes, reduce waste, and strengthen the supply chain. This would increase profitability while leaving greater room to integrate sustainability considerations. In risk management, synchronised AI systems analysing vast datasets collaboratively might better predict potential risks or market downturns, equipping businesses and other organisations to prepare or pivot before a crisis emerges and so limit the related social and societal impact. Likewise, synchronised AI systems could provide insights for more efficient urban planning and environmental protection strategies. This could lead to better traffic management, energy conservation, and pollution control, enhancing the quality of life in urban areas.

In various domains beyond business, the deployment of AI with a prosocial orientation holds immense potential for the well-being of humanity and the planet. Particularly in healthcare, synchronisation between the human brain and AI systems could usher in a revolutionary era for patient care and medical research. Recent studies highlight the positive impact of clinicians synchronising their movements with patients, thereby increasing trust and reducing pain. Extending this concept to AI chatbots or AI-enabled robotic caregivers that are synchronised with those under their ‘care’ holds the promise of enhancing patient experience and improving outcomes, as evidenced by recent research indicating that LLMs outperformed physicians in diagnosing illnesses and that patients preferred interacting with them.

In the educational domain, the integration of AI systems with a focus on synchrony is equally promising. Research has demonstrated that synchronised brain waves in high school classrooms were predictive of higher performance and happiness among students [12]. This study underscores the significance of neural synchrony in the learning environment. By leveraging AI tutoring systems capable of detecting and responding to students’ cognitive states in real time, education technology can potentially replicate the positive outcomes observed in synchronised classroom settings. Incorporating AI systems that resonate with students’ brain states has the potential to create a more conducive and effective learning atmosphere, optimising engagement and fostering positive learning outcomes.

Perspectives and potential

The excitement surrounding the prospects of brain-to-machine and machine-to-machine synchrony brings with it a set of paramount concerns that necessitate scrutiny and that are far from merely technical. Data privacy emerges as a critical apprehension, given the intimate nature of the neural information being processed by these systems. The ethical dimensions of such synchronisation, particularly in the realm of AI decision-making, present complex challenges that require careful consideration [13, 14].

Expanding on these concerns, two overarching issues demand heightened attention. Firstly, the preservation of human autonomy stands as a foundational principle. As we enter the era of brain-machine synchrony, it becomes imperative to ensure that individuals retain their ability to make informed choices. Avoiding scenarios where individuals feel coerced or manipulated by technology is crucial to upholding ethical standards.

Secondly, the question of equity in access to these technologies emerges as a pressing matter. Currently, such advanced technologies are often costly and may not be accessible to all segments of society. This raises concerns about exacerbating existing inequalities [15]. A scenario where only certain privileged groups can harness the benefits of brain-machine synchrony might deepen societal divides. Moreover, the lack of awareness about these technologies further compounds issues of equitable access [16].

The integration of AI with human cognition marks the threshold of an unprecedented era, in which machines not only replicate human intelligence but also mirror intricate behavioural patterns and emotions. The potential synchronisation of AI with human intent and emotion holds the promise of redefining the nature of human-machine collaboration and, perhaps, even the essence of the human condition. The outcome of harmonising humans and machines will significantly impact humanity and the planet, contingent upon the human aspirations guiding this pursuit, and it opens opportunities for an advanced human-centred AI experience, in a “Fusion Mode”, as coined in the “Artificial Integrity” concept. This raises a timeless question, reverberating through the course of human history: what do we value, and why?

A crucial point to emphasise is that the implications of synchronising humans and machines extend far beyond the realm of AI experts; they concern every individual. This underscores the necessity of raising awareness and engaging the public at every stage of this transformative journey. As the development of AI progresses, it is essential to ensure that the ethical, societal, and existential dimensions are shaped by collective values and reflection, avoiding unilateral decisions by Big Tech that may not align with the broader interests of humanity. What happens next shapes our individual and collective future. Getting it right is our shared responsibility.

1. Picard, R. W. (1995). “Affective Computing.” MIT Media Laboratory Perceptual Computing Section.
2. Achterberg, J., Akarca, D., Strouse, D. J., et al. (2023). Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings. Nature Machine Intelligence, 5, 1369–1381. https://doi.org/10.1038/s42256-023-00748-9
3. Lisman, J. E., & Idiart, M. A. (1995). Storage of 7 +/- 2 short-term memories in oscillatory subcycles. Science, 267(5203), 1512–1515. https://doi.org/10.1126/science.7878473
4. Bastos, A. M., Lundqvist, M., Waite, A. S., & Miller, E. K. (2020). Layer and rhythm specificity for predictive routing. Proceedings of the National Academy of Sciences, 117(49), 31459–31469. https://doi.org/10.1073/pnas.2014868117
5. Buzsáki, G. (2015). Hippocampal sharp wave-ripple: A cognitive biomarker for episodic memory and planning. Hippocampus, 25(10), 1073–1188. https://doi.org/10.1002/hipo.22488
6. O’Neill, J., Boccara, C. N., Stella, F., Schoenenberger, P., & Csicsvari, J. (2008). Superficial layers of the medial entorhinal cortex replay independently of the hippocampus. Science, 320(5879), 129–133.
7. Ego-Stengel, V., & Wilson, M. A. (2010). Disruption of ripple-associated hippocampal activity during rest impairs spatial learning in the rat. Hippocampus, 20(1), 1–10.
8. Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–3650. https://doi.org/10.18653/v1/P19-1356
9. Schwartz, R., Dodge, J., Smith, N. A., Overton, J., & Varshney, L. R. (2019). Green AI. Proceedings of the AAAI Conference on Artificial Intelligence, 33, 9342–9350. https://doi.org/10.1609/aaai.v33i01.33019342
10. Furber, S. B., Galluppi, F., Temple, S., & Plana, L. A. (2014). The SpiNNaker Project. Proceedings of the IEEE, 102(5), 652–665. https://doi.org/10.1109/JPROC.2014.2304638
11. Merolla, P. A., Arthur, J. V., Alvarez-Icaza, R., Cassidy, A. S., Sawada, J., Akopyan, F., … Modha, D. S. (2014). A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 345(6197), 668–673. https://doi.org/10.1126/science.1254642
12. Dikker, S., Wan, L., Davidesco, I., Kaggen, L., Oostrik, M., McClintock, J., … & Poeppel, D. (2017). Brain-to-brain synchrony tracks real-world dynamic group interactions in the classroom. Current Biology, 27(9), 1375–1380.
13. Dignum, V. (2018). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. AI & Society, 33(3), 475–476. https://doi.org/10.1007/s00146-018-0812-0
14. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
15. Diakopoulos, N. (2016). Accountability in Algorithmic Decision Making. Communications of the ACM, 59(2), 56–62. https://doi.org/10.1145/2844148
16. Kostkova, P., Brewer, H., de Lusignan, S., Fottrell, E., Goldacre, B., Hart, G., Koczan, P., Knight, P., Marsolier, C., McKendry, R. A., Ross, E., Sasse, A., Sullivan, R., Chaytor, S., Stevenson, O., Velho, R., & Tooke, J. (2016). Who Owns the Data? Open Data for Healthcare. Frontiers in Public Health, 4. https://doi.org/10.3389/fpubh.2016.00107
