
Alzheimer’s, Parkinson’s: “tomorrow, AI will detect disease”

Mounîm A. El Yacoubi, Professor at Télécom SudParis (IP Paris)

AI and machine learning are already used today to help diagnose patients. How can they be useful?

Mounim El Yacoubi. First of all, it must be stressed that diagnosis is not just a matter of sorting patients into categories. There is no clear line between what is “normal” and what is “pathological”. This is why doctors remain in charge of their diagnoses, and why machine learning solutions exist only as aids, intended not to replace them but to help them prioritise patients.

Hence, today, machine learning has a contribution to make in medical diagnosis, particularly in the detection of anomalies in MRIs. This type of method is based on supervised learning from millions of images, from which the systems learn to detect anomalies with very high classification rates – sometimes even finer than those of doctors.
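As a purely illustrative sketch of what supervised classification means (not the MRI systems described here), a toy nearest-centroid classifier on synthetic “image feature” vectors shows the train-then-classify pattern; all data below is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: feature vectors extracted from scans,
# labelled 0 = "no anomaly", 1 = "anomaly".
X_train = np.vstack([rng.normal(0.0, 1.0, (100, 16)),   # normal scans
                     rng.normal(2.0, 1.0, (100, 16))])  # anomalous scans
y_train = np.array([0] * 100 + [1] * 100)

# "Training": compute one centroid per class.
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def classify(x):
    """Assign x to the class whose centroid is nearest (Euclidean)."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# Evaluate the classification rate on fresh synthetic scans.
X_test = np.vstack([rng.normal(0.0, 1.0, (50, 16)),
                    rng.normal(2.0, 1.0, (50, 16))])
y_test = np.array([0] * 50 + [1] * 50)
preds = np.array([classify(x) for x in X_test])
accuracy = (preds == y_test).mean()
print(f"classification rate: {accuracy:.2f}")
```

Real systems replace the centroid rule with deep networks trained on millions of labelled images, but the supervised principle is the same.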

So, you are saying that AI can be used to go beyond current health testing?

Yes, it can. Traditional diagnostic methods, which rely on blood tests, medical imaging or the measurement of other biological parameters, try to identify an anomaly or the characteristic symptoms of a pathology. They work fairly well but are not perfect, because they are often invasive and costly in terms of equipment and personnel. Patients also have to come to the hospital or medical laboratory. For all these reasons, diagnostic tools based on machine learning, using data from inexpensive and non-invasive sensors, are of interest to the medical community.

You are working on techniques using data that go beyond traditional medical testing?

We work on so-called “ecological data”, such as handwriting, gait or voice. For Parkinson’s disease, we are conducting a European research project in collaboration with the Institut du Cerveau et de la Moelle épinière in France. The aim is to be able to detect abnormalities in a patient’s voice and facial expressions – characteristic of the disease – during a simple video call.

People suffering from this neurodegenerative disorder generally show hypomimia, i.e. a reduction in the amplitude of expressive movements, or voice alterations. We are developing a machine learning method to detect these signals automatically, and we aim to compare the results with MRI data and other clinical indicators. We hope that our approach can help to better characterise patients and stratify the disease, meaning that we will identify criteria for detecting groups of Parkinson’s patients with different behaviours, who could then be offered different treatments and therapies by their doctors.

With a tool like this, a first diagnostic step could be taken without even needing to bring the patient into the medical centre!

Will they be able to use data that is currently imperceptible to doctors?

In theory, the doctor could detect these signs, but in practice it is very complicated, because you would have to compare how facial expressions evolved over several months. We developed a similar approach for Alzheimer’s disease, in collaboration with the Broca Hospital in Paris. The aim was to identify the deterioration in handwriting, voice and walking attributable to the disease. For this work on neurodegenerative diseases, the challenge is to reconcile specificity and sensitivity. We want to be able to identify patients with early forms without confusing them with other neurological disorders, such as mild cognitive impairment or other pathologies. It’s very tricky.
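The specificity/sensitivity trade-off mentioned above can be made concrete with a toy confusion matrix (the numbers below are invented purely for illustration):

```python
# Toy confusion matrix for an early-detection screen (invented numbers):
# rows = actual condition, columns = test outcome.
tp, fn = 42, 8    # patients with the disease: detected / missed
fp, tn = 5, 95    # people without it: false alarms / correctly cleared

sensitivity = tp / (tp + fn)  # share of true patients the test catches
specificity = tn / (tn + fp)  # share of healthy people correctly cleared

print(f"sensitivity = {sensitivity:.2f}")  # 0.84
print(f"specificity = {specificity:.2f}")  # 0.95
```

Raising one figure typically lowers the other: a looser detection threshold catches more early-stage patients (higher sensitivity) but flags more people with unrelated conditions (lower specificity), which is exactly the tension described above.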

Can connected devices help you deploy these approaches?

For type-2 diabetes, we use connected blood glucose sensors. They allow us to read blood glucose levels continuously; we don’t need to ask patients to prick themselves and collect measurements 24 hours a day. We combine this data with information on meals and insulin intake, which the patient can give us via a diabetes-tracking application on a smartphone, and their physical activity, which is recorded via a connected bracelet. By combining this information, we can predict the blood sugar level.
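The data fusion described here can be sketched as assembling, at each time step, one feature vector from the different streams; all field names and values below are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical synchronized streams, sampled every 15 minutes.
t0 = datetime(2021, 5, 1, 8, 0)
glucose = [110, 118, 140, 165, 150]   # mg/dL, from the connected sensor
carbs   = [0,   60,  0,   0,   0]     # g, from the meal-logging app
insulin = [0,   6,   0,   0,   0]     # units, from the app
steps   = [120, 30,  45,  300, 800]   # from the connected bracelet

# One feature vector per time step: [glucose, carbs, insulin, activity].
timeline = [
    (t0 + i * timedelta(minutes=15),
     [glucose[i], carbs[i], insulin[i], steps[i]])
    for i in range(len(glucose))
]

for ts, features in timeline:
    print(ts.strftime("%H:%M"), features)
```

A sequence of such vectors is the natural input for the sequential models discussed next.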

This is a real challenge because each person has his or her own metabolism, his or her own genetics… We have therefore created personalised models based on ‘sequential deep learning’. This work was the subject of a thesis by Maxime de Bois, which I co-directed with Mehdi Ammi from the University of Paris-Saclay. Maxime developed his technique on a synthetic patient base validated by the FDA, the American regulatory authority. He then tested it on six patients in collaboration with the Revesdiab network.
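A minimal sketch of what “sequential” means here: a recurrent cell consumes the feature vectors step by step and carries a hidden state forward. This toy NumPy cell has random, untrained weights and is not the thesis model; it only illustrates the mechanism.

```python
import numpy as np

rng = np.random.default_rng(42)

# Dimensions: 4 input features per step (glucose, carbs, insulin, activity),
# a small hidden state, and one scalar output (predicted future glucose).
n_in, n_hidden = 4, 8
W_xh = rng.normal(0, 0.1, (n_hidden, n_in))      # input -> hidden
W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden))  # hidden -> hidden
w_out = rng.normal(0, 0.1, n_hidden)             # hidden -> prediction

def predict(sequence):
    """Run the recurrent cell over a sequence of feature vectors."""
    h = np.zeros(n_hidden)
    for x in sequence:
        h = np.tanh(W_xh @ x + W_hh @ h)  # simple Elman-style update
    return float(w_out @ h)  # untrained weights, so the value is arbitrary

# A short hypothetical history: [glucose, carbs, insulin, activity] per step.
history = np.array([[110, 0, 0, 120],
                    [118, 60, 6, 30],
                    [140, 0, 0, 45]])
print(predict(history))
```

Personalisation then amounts to training one such model (in practice a much deeper LSTM-style network) per patient on that patient’s own sequences.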

Did you encounter any difficulties?

Yes, several, but we were able to resolve them. To overcome the lack of data, we use a transfer learning method, which allows us to pre-train the model on other patients, ensuring that it learns the most general parameters possible, and therefore the most adaptable to a new patient. To improve the acceptability of the system to doctors, we took the differences in predictions into account in our choice of metrics.
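The pre-train-then-adapt idea can be sketched with a toy linear model: fit on pooled “source patients”, then fine-tune briefly on a few samples from a new patient. All data is synthetic, and this is plain gradient descent, not the adversarial multi-source method of the published work.

```python
import numpy as np

rng = np.random.default_rng(1)

def gd_fit(X, y, w, lr=0.01, steps=200):
    """A few steps of gradient descent on mean-squared error."""
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

# Source patients: plenty of data, roughly shared physiology.
Xs = rng.normal(size=(500, 3))
ys = Xs @ np.array([1.0, -0.5, 0.3]) + rng.normal(0, 0.1, 500)

# New patient: only 10 samples, slightly different parameters.
Xt = rng.normal(size=(10, 3))
yt = Xt @ np.array([1.2, -0.4, 0.35]) + rng.normal(0, 0.1, 10)

w0 = np.zeros(3)
w_pre = gd_fit(Xs, ys, w0)               # pre-train on pooled source data
w_ft = gd_fit(Xt, yt, w_pre, steps=50)   # fine-tune on the new patient

err_pre = np.mean((Xt @ w_pre - yt) ** 2)
err_ft = np.mean((Xt @ w_ft - yt) ** 2)
print(f"before fine-tuning: {err_pre:.3f}, after: {err_ft:.3f}")
```

The pre-trained weights already land close to the new patient’s parameters, so a short fine-tuning run is enough to adapt them, which is the point of transfer learning when per-patient data is scarce.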

To explain how our model works, we integrated layers into our deep neural network (the learning method) to estimate the weight of each variable over time. For each prediction, we are thus able to indicate, at each point in time, which variable (blood sugar, food or insulin) was decisive. This is a very interesting aspect, because the doctors themselves do not know which parameter dominates at a given moment.
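Attention-style weighting of this kind can be sketched as a softmax over per-step relevance scores, giving a normalized importance for each variable across time. The scores below are invented, not taken from the published model.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy relevance scores: rows = time steps, columns = variables
# [glucose, carbs, insulin]; values are invented for illustration.
scores = np.array([[0.2, 0.1, 0.0],
                   [0.5, 2.0, 1.0],   # the meal step gets high carb relevance
                   [1.5, 0.3, 0.2]])

# One attention distribution per variable, across the time steps.
attention = np.apply_along_axis(softmax, 0, scores)

# Each column sums to 1, so the weights are directly interpretable:
# e.g. which past time step's carb intake mattered most for this prediction.
print(attention.round(2))
print(attention.sum(axis=0))
```

Because the weights form a probability distribution over time, they can be shown to a doctor as “what the model looked at”, which is what makes this mechanism useful for acceptability.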

Is this your only project with connected objects?

No, we also have a project to improve the diagnosis of cardiac arrhythmia using a connected bracelet that measures arterial stiffness. Here too, we will compare our results with those obtained with electrocardiograms.

Do you think that, in the future, our connected fridge will be able to alert us to a risk of depressive behaviour?

It is indeed a good object for spotting changes in habits… One can imagine that these data could be correlated with those of a smartphone, or with the nature of the websites visited and the activity on them. This will raise a major data protection issue. Will we allow our doctor to consult the analyses from our fridge? Will our search engine or social networks warn us if our behaviour changes in a dangerous way? One imagines that people with chronic pathologies who experience phase changes, such as diabetics or sufferers of bipolar disorder, would be more likely to give informed consent to this type of approach.

Interview by Agnès Vernet

For further information:

  • DIGIPD: Validating DIGItal biomarkers for better personalized treatment of Parkinson’s Disease, https://www.erapermed.eu/wp-content/uploads/2021/01/Newsletter-ERA-PerMed_final.pdf, 2021.
  • Maxime De Bois, Mounîm A. El-Yacoubi, Mehdi Ammi, “Adversarial multi-source transfer learning in healthcare: Application to glucose prediction for diabetic people,” Computer Methods and Programs in Biomedicine, 199: 105874, 2021.
  • Maxime De Bois, Mounîm A. El-Yacoubi, Mehdi Ammi, “Enhancing the Interpretability of Deep Models in Healthcare Through Attention: Application to Glucose Forecasting for Diabetic People,” International Journal of Pattern Recognition and Artificial Intelligence, to appear, 2021.
  • Mounîm A. El-Yacoubi, Sonia Garcia-Salicetti, Christian Kahindo, Anne-Sophie Rigaud, Victoria Cristancho-Lacroix, “From aging to early-stage Alzheimer’s: Uncovering handwriting multimodal behaviors by semi-supervised learning and sequential representation learning,” Pattern Recognition, Vol. 86, pp. 112–133, 2019.
  • Saeideh Mirzaei, Mounîm A. El Yacoubi, Sonia Garcia-Salicetti, Jérôme Boudy, Christian Kahindo, Victoria Cristancho-Lacroix, Hélène Kerhervé, Anne-Sophie Rigaud, “Two-stage feature selection of voice parameters for early Alzheimer’s disease prediction,” IRBM, Vol. 39, No. 6, pp. 430–435, 2018.

Contributors

Mounîm A. El Yacoubi
Professor at Télécom SudParis (IP Paris)

Mounîm A. El Yacoubi's research focuses on AI and machine learning modelling of various real-life phenomena. He is particularly interested in the modelling of biometric data, such as handwriting, gestures, voice, face or walking, for the detection of neurodegenerative diseases or for human-computer interaction.