Generated using AI

How gamification can build cognitive resilience to complex crisis situations

Jean Langlois-Berthelot
Doctor of Applied Mathematics and Head of Division in the French Army
Christophe Gaie
Head of the Engineering and Digital Innovation Division at the Prime Minister's Office
Key takeaways
  • Cognitive resilience is the ability to keep a cool head, adapt and make better decisions in stressful situations and when faced with information overload.
  • Traditional methods of training cognitive resilience have shown their limitations, and these tools need to be modernised to better prepare people.
  • To do this, simulation and gamification enable more dynamic, interactive experiences that are closer to the current complex conditions of crises.
  • Gamification provides a framework that stimulates cognitive complexity, forcing decisions to be made in uncertain environments.
  • However, the development of these technologies raises ethical and deontological questions, such as protecting personal data and ensuring it is used without manipulation or conditioning.

Faced with the acceleration of increasingly complex and uncertain international crises, the issue of cognitive resilience has become central. This concept, which refers to the ability to keep a cool head, adapt and make the best decisions in situations of stress, information overload and ambiguity, is now a major challenge for both military and civilian decision-makers.

Unfortunately, traditional training methods based on fixed procedures and relatively predictable scenarios are revealing their limitations. They do not sufficiently reflect the changing, sometimes chaotic reality in which actors must operate today. This is where simulation and gamification come into play, offering experiences that are more dynamic, interactive and, above all, closer to the real complexity of crises1.

The inadequacy of traditional simulations

Crisis management training has relied on simulators designed to reproduce fairly linear situations where outcomes are relatively predictable. This approach is effective when it comes to learning procedures or reflexes, but proves insufficient for preparing for hybrid crises, cyber crises, or asymmetric conflicts where information is incomplete, unclear, or even contradictory. What should be done if attacks simultaneously hit several vital infrastructures such as the energy2, telecommunications3 and information4 sectors?

Under these conditions, decision-making becomes a real cognitive challenge, influenced by mental load and emotions, as well as sometimes unconscious biases5. These essential human aspects are too often absent from traditional training, which limits its effectiveness in unpredictable situations6. It is therefore necessary to modernise the tools at our disposal in order to be better prepared.

Major powers at the forefront of cognitive technologies

To meet these challenges, several countries have developed innovative solutions. In the United States, for example, DARPA (the Defense Advanced Research Projects Agency, a military technology research and development organisation) is focusing on immersive simulators that incorporate biometric sensors to measure operators' cognitive load and emotional state in real time. This makes it possible to automatically adjust the difficulty of the exercises and offer personalised training7.
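The adaptive principle described above can be sketched in a few lines. This is a hypothetical illustration, not DARPA's actual system: it assumes the biometric sensors already yield a normalised cognitive-load score between 0 and 1, and it keeps the trainee inside a target load band by nudging the difficulty level up or down.

```python
# Hypothetical sketch of load-adaptive difficulty adjustment.
# Assumes a sensor pipeline that delivers cognitive_load in [0, 1].

def adjust_difficulty(current_level: int, cognitive_load: float,
                      low: float = 0.3, high: float = 0.8) -> int:
    """Raise difficulty when the trainee is under-loaded, lower it
    when overloaded, and hold it steady inside the target band."""
    if cognitive_load > high:            # overload: ease off
        return max(1, current_level - 1)
    if cognitive_load < low:             # under-challenged: push harder
        return current_level + 1
    return current_level                 # inside the target band

print(adjust_difficulty(5, 0.85))  # overloaded trainee -> 4
print(adjust_difficulty(5, 0.20))  # under-challenged  -> 6
print(adjust_difficulty(5, 0.50))  # in the band       -> 5
```

The thresholds (0.3 and 0.8) are illustrative; in a real system they would be calibrated per trainee from baseline EEG or biometric measurements.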

In China, the approach is even more ambitious, with the development of brain-machine interfaces aimed at directly increasing cognitive abilities, such as alertness or memory, in highly complex simulated environments8. In Europe, a more interdisciplinary approach is being taken, combining artificial intelligence, cognitive science and social analysis to better model human decision-making in crisis situations9. However, the integration of this research into training systems has yet to become widespread.


Nevertheless, the development of these technologies raises crucial ethical and deontological questions. It is imperative to guarantee data protection and ensure that these cognitive augmentation tools are used within a strict ethical framework, without manipulative or conditioning bias and in the interests of citizens, i.e. not exclusively for commercial or military purposes10.

Gamification: more than just a fun tool

Gamification is sometimes seen as a way to make learning more fun, but its potential goes far beyond that. When well designed, it provides a framework that simulates cognitive complexity, forcing participants to make decisions in uncertain environments, with consequences that can be unpredictable11. These serious games, particularly those offering multiple-branch scenarios, are effective training grounds for developing adaptability, stress management and decision-making under pressure12. In cyber crisis training, for example, their effectiveness has been confirmed by several studies13.
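A multiple-branch scenario of this kind is, at its core, a decision graph. The sketch below (with node names and content invented purely for illustration) shows the minimal structure such a serious game can build on: each situation offers options, and each choice moves the player to a different branch with different consequences.

```python
# Minimal branching-scenario structure: each node is a situation,
# each option maps a decision to the next node. Content is invented.
SCENARIO = {
    "start": {
        "text": "Power grid anomaly detected. Reports are contradictory.",
        "options": {"isolate": "grid_isolated", "wait": "escalation"},
    },
    "grid_isolated": {
        "text": "Grid segment isolated; telecom outage now reported.",
        "options": {"brief_press": "end_managed", "stay_silent": "rumours"},
    },
    "escalation": {"text": "Outage spreads nationally.", "options": {}},
    "rumours": {"text": "Disinformation fills the silence.", "options": {}},
    "end_managed": {"text": "Crisis contained.", "options": {}},
}

def play(node_id: str, choices: list) -> str:
    """Follow a sequence of decisions through the scenario graph
    and return the id of the final node reached."""
    for choice in choices:
        node_id = SCENARIO[node_id]["options"][choice]
    return node_id

print(play("start", ["isolate", "stay_silent"]))  # -> rumours
```

The same structure scales to the AI-generated scenarios discussed below: a generative model can append new nodes and options at runtime instead of reading them from a fixed table.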

Furthermore, the integration of generative artificial intelligence into the design of these serious games paves the way for even more dynamic scenarios, where AI can generate events, characters or twists in real time, making each training session unique and requiring increased cognitive adaptability14.

French innovations: discreet but concrete work

Between 2022 and 2024, SRAT (System Risk Assessment and Technology) carried out several projects that perfectly illustrate this dynamic. The team developed a simulator based on the Unity engine, recreating a crisis management centre in a conflict zone where users' decisions influence the course of events, thus introducing the notion of uncertainty and unpredictability. Coupled with an electroencephalography (EEG) interface, this simulator measures users' cognitive load and emotions in real time, providing valuable feedback on their decision-making mechanisms. At the same time, SRAT worked on the C-RAND project, a benchmark for emerging technologies in cyber-cognitive interfaces, in collaboration with the British Army. This work enabled the selection of the most suitable solutions for integration into training and operations systems15.

Another notable project is the development of a digital serious game in HTML, used to train both Master's students and officers at the École Militaire Interarmes. This game simulates a cyber crisis with multiple ramifications, forcing players to manage information overload and make quick, relevant decisions. Feedback shows a significant improvement in adaptability and stress management skills16. Finally, SRAT has provided more than 100 hours of teaching combining artificial intelligence, cognitive science and strategy within the Higher Military Scientific and Technical Education (EMSST) programme, thus preparing future executives for the cognitive challenges of contemporary crises.


To maximise the impact of these tools on learner training, the role of debriefing and structured feedback is crucial. Post-simulation analysis, where the data collected (cognitive load, emotions, decisions) is used for personalised feedback, enables participants to understand their biases and integrate their learning in a lasting way.
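Part of such a debrief can be automated. The sketch below is a minimal, hypothetical example (field names such as `load` and `response_s` are invented for illustration): it condenses a session log into a few summary metrics and flags the decisions taken under the highest cognitive load, which are natural starting points for a discussion of biases.

```python
# Hypothetical debrief summary over a per-decision session log.
# Each entry records a decision id, response time (s), and the
# normalised cognitive load measured at that moment.
from statistics import mean

def debrief(log: list) -> dict:
    """Summarise a session log into feedback metrics, flagging
    decisions made under high load (> 0.7) for bias review."""
    return {
        "avg_response_s": round(mean(d["response_s"] for d in log), 2),
        "avg_load": round(mean(d["load"] for d in log), 2),
        "decisions_under_pressure": [d["id"] for d in log if d["load"] > 0.7],
    }

session = [
    {"id": "isolate_grid", "response_s": 12.0, "load": 0.55},
    {"id": "brief_press", "response_s": 41.0, "load": 0.82},
    {"id": "deploy_team", "response_s": 20.0, "load": 0.60},
]
print(debrief(session))
```

In this invented session, only `brief_press` is flagged for review: it took the longest and was made under the heaviest load.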

Towards enhanced cognitive sovereignty

These different approaches clearly show that simulation and gamification are not mere gadgets, but strategic tools capable of transforming training and increasing the cognitive resilience of decision-makers. In a world where information is a lever of power and where the speed of decision-making can make the difference between success and failure, it is essential to strengthen this ability to adapt and make decisions under stress17. Cognitive sovereignty, like technological sovereignty, is now a matter of national security.

It is also important to consider that cognitive resilience is not solely the preserve of decision-makers; an approach open to every citizen would raise awareness and prepare a wider audience for the cognitive challenges of contemporary crises (disinformation, rumours, emergencies). In the current context of hybrid attacks aimed at spreading disinformation, stirring up hatred and breaking national cohesion, raising awareness is a powerful lever of resistance that needs to be developed.

But it is not just about technology. Cognitive resilience is a complex process that requires an integrated approach, combining technology, human understanding and appropriate education. The work of SRAT in France is leading the way, with a pragmatic, rigorous approach that is closely aligned with the real needs of the armed forces. The challenge now is to ensure the wider dissemination and adoption of these innovations, so that cognitive training becomes an operational reality and an asset in the management of tomorrow's crises.

Simulation and gamification open up exciting prospects for meeting the challenge of cognitive resilience. More than ever, these tools must be seen as essential levers for preparing decision-makers for the complex and uncertain realities of modern crises. Combining technological advances with a human-centred approach is the key to building true cognitive sovereignty, guaranteeing efficiency and security in a rapidly changing world.

1Salas, E., Bowers, C. A., & Rhodenizer, L. (2017). It is not how much you practice, but how you practice: Toward a new paradigm in training research. Psychological Science, 8(4), 270–276.
2IT-Pro. (2025, April 9). Menaces cyber sur le secteur énergétique européen ! iTPro.fr. https://www.itpro.fr/menaces-cyber-sur-le-secteur-energetique-europeen/
3Les Echos. (2024, November 18). Câble télécom rompu : Berlin et Helsinki évoquent la « guerre hybride » et la menace russe. Les Echos. https://www.lesechos.fr/monde/enjeux-internationaux/cable-telecom-rompu-berlin-et-helsinki-evoquent-la-guerre-hybride-et-la-menace-russe-2132308
4Martin, O. (2024, July 25). Campagnes de désinformation, cyberattaques, ingérences… le combat hybride a déjà commencé, à nous d'y faire face. Le Figaro. https://www.lefigaro.fr/vox/monde/le-combat-hybride-a-deja-commence-a-nous-d-y-faire-face-20240725
5Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
6Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253.
7Gonzalez, C., et al. (2021). Adaptive cognitive workload assessment using EEG and eye tracking in crisis simulations. NeuroImage, 236, 118070.
8Wang, Y., Li, X., & Zhang, Y. (2022). Brain-computer interfaces for enhancing cognitive performance: A review. Frontiers in Neuroscience, 16, 987654.
9Bourrier, M., & Bénaben, F. (2021). Cognitive systems and crisis management: An integrative framework. Cognition, Technology & Work, 23(1), 15–29.
10Shults, F. L., & Wildman, W. J. (2019). Ethics, computer simulation, and the future of humanity. In S. Diallo, W. Wildman, F. Shults, & A. Tolk (Eds.), Human Simulation: Perspectives, Insights, and Applications. New Approaches to the Scientific Study of Religion, vol. 7. Springer, Cham. https://doi.org/10.1007/978-3-030-17090-5_2
11Michael, D., & Chen, S. (2006). Serious games: Games that educate, train, and inform. Muska & Lipman/Premier-Trade.
12Connolly, T. M., Boyle, E. A., MacArthur, E., Hainey, T., & Boyle, J. M. (2012). A systematic literature review of empirical evidence on computer games and serious games. Computers & Education, 59(2), 661–686.
13Anderson, J., & Rainie, L. (2021). The future of crisis simulation and training. Journal of Cognitive Engineering and Decision Making, 15(2), 90–108.
14Eun, S.-J., Eun, J. K., & Kim, J. Y. (2023). Artificial intelligence-based personalized serious game for enhancing the physical and cognitive abilities of the elderly. Future Generation Computer Systems, 141, 713–722. https://doi.org/10.1016/j.future.2022.12.017
15Langlois-Berthelot, L. (2024). Synthèse des innovations en interfaces cyber-cognitives pour l'Armée de Terre. SRAT.
16Langlois-Berthelot, L. (2023). Rapport interne sur la simulation et la gamification dans la gestion de crise. SRAT.
17Woods, D. D., & Hollnagel, E. (2019). Resilience engineering: Concepts and precepts. CRC Press.
