
AI in medical decision-making: a question of ethics?

Damien Lacroux
Philosopher of science and researcher at the UNESCO Chair in the Ethics of the Living and the Artificial
Key takeaways
  • Artificial intelligence is gradually becoming an integral part of predictive and personalised medicine, and of therapeutic decision-making by medical professionals.
  • The aim of the MIRACLE project is to identify the risk of recurrence in lung cancer patients, using an algorithm to aid medical decision-making.
  • To achieve this, the algorithm is fed with a large amount of patient data – the more data there is, the narrower the algorithm’s margin of error.
  • But the more powerful the AI, the more opaque it becomes for practitioners, who have no way of understanding what data has led to the probability of recurrence proposed by the AI.
  • AI therefore raises ethical issues concerning transparency in medicine, where the main fear of patients remains that the machine will impose a diagnosis without human intervention.

Artificial intelligence to detect breast cancer1 or prostate cancer2, algorithms that calculate our physiological age to predict ageing3, conversational agents to monitor our mental health4… Artificial intelligence tools are gradually becoming part of medical practice. The focus is on predictive and personalised medicine, as well as therapeutic decision-making by the medical profession. But how do patients perceive this relationship between doctors and AI? How do practitioners really interact with the technology?

These are the questions posed by Damien Lacroux, philosopher of science and researcher at the UNESCO Chair in the Ethics of the Living and the Artificial. “During my interviews, I noticed that patients imagine a particular relationship between doctors and AI,” explains the researcher, who specialises in the integration of algorithms in oncology. “We tend to believe that human specialists in oncology deliberate on our case before making a decision, and that the technology intervenes at a later stage to validate the deliberation.” But is this really the case?

AI to predict the risk of lung cancer recurrence

To find out, Damien Lacroux spoke to scientists from the MIRACLE5 project. This ambitiously named European study was launched in 2021 and brings together laboratories in Italy, Spain, Germany and France. The aim is to identify the risk of recurrence in lung cancer patients, using an algorithm to aid medical decision-making. To achieve this, researchers are training the AI (machine learning) in a supervised manner: the algorithm is “fed” with data from a cohort of patients for whom the existence or absence of recurrence is known. The data ingested is of three types: clinico-pathological data (such as the patient’s sex, the history of their disease or the treatments they may have undergone); medical imaging data; and finally, omics data, i.e. a mass of information relating to molecular biology (DNA or RNA from tumours).

Using a cohort of 220 patients, the scientists feed the algorithm all the data collected, along with information on whether or not a recurrence occurred – and, if so, how long it took. “Then we let the algorithm do its work! This involves an unimaginable amount of data, which is impossible for humans to process on their own,” explains Damien Lacroux. “Today, the project is behind schedule, and we’ve only just finished collecting data from the first cohort. We still have to train the algorithm on this data and then recruit a second cohort to validate its training.” So we’ll have to wait a little longer before seeing the MIRACLE project in action.
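Schematically, the supervised set-up described above can be sketched as follows. This is a toy illustration only: the feature names, the hidden labelling rule and the choice of logistic regression are all invented for the sketch, not taken from the MIRACLE project, whose pipeline is not public.

```python
# Illustrative sketch only: synthetic data and a deliberately simple model.
# Field names and the hidden labelling rule are invented, not from MIRACLE.
import math
import random

random.seed(0)

def sigmoid(z: float) -> float:
    z = max(-30.0, min(30.0, z))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def make_patient():
    """One synthetic patient record merging the three data types."""
    omics = [random.gauss(0, 1) for _ in range(5)]    # stand-in for molecular data
    imaging = [random.gauss(0, 1) for _ in range(2)]  # stand-in for imaging scores
    clinical = [float(random.choice([0, 1]))]         # e.g. prior treatment yes/no
    features = omics + imaging + clinical
    # Hidden rule used only to label the synthetic cohort (recurrence yes/no):
    recurred = 1 if omics[0] + omics[1] + imaging[0] > 0 else 0
    return features, recurred

cohort = [make_patient() for _ in range(220)]  # cohort size cited in the article

# Supervised training: logistic regression fitted by stochastic gradient descent.
dim = len(cohort[0][0])
weights, bias = [0.0] * dim, 0.0
for _ in range(300):  # epochs
    for x, y in cohort:
        p = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
        grad = p - y  # gradient of the log loss w.r.t. the linear score
        weights = [w - 0.1 * grad * xi for w, xi in zip(weights, x)]
        bias -= 0.1 * grad

def predict(x) -> float:
    """Probability of recurrence for one patient record."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in cohort) / len(cohort)
```

The point of the sketch is the shape of the problem – known outcomes supervise the fit, and the model then outputs a recurrence probability for new patients – not the model itself, which in the real project is far more complex.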

AI: a black box for medical decision-making

But the way it works immediately raises an ethical issue, pointed out by the researchers interviewed by Damien Lacroux. “At the start of training, bioinformaticians manage to slice up the datasets and associate the AI’s results with this or that input factor. But gradually, the data increases and it becomes a black box.” This growing volume of data makes the models used to refine predictions more complex. And therein lies the paradox: as the amount of data increases, the algorithm’s margin of error decreases. The AI is therefore more accurate, but the way it works is less clear to practitioners. How can they explain the decisions made by the AI to patients, or guarantee the absence of bias, if they themselves are not familiar with its inner workings?

In the field of oncology, decision trees are often used to help doctors justify their clinical reasoning. However, the integration of algorithmic scores into these processes can conflict with doctors’ need for transparency, as they sometimes struggle to understand what input data led the AI to its estimated probability of recurrence. “Even if we managed to decipher every internal calculation in the algorithm, the result would be so mathematically complex that doctors would not be able to interpret it or use it in their clinical practice,” explains a German bioinformatician working on the MIRACLE project, interviewed6 by Damien Lacroux for his forthcoming study.
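The contrast with such opaque scores is that a decision tree yields rules a clinician can read off branch by branch. A minimal sketch, with thresholds and variables invented purely for illustration (not clinical values):

```python
# Toy decision tree: every prediction comes with a human-readable rule path.
# The 30 mm threshold and the variables are invented for illustration only.
def recurrence_risk(tumour_size_mm: float, prior_treatment: bool) -> str:
    if tumour_size_mm > 30:
        # Large tumour: risk depends on whether the patient was already treated.
        return "medium" if prior_treatment else "high"
    # Small tumour: low risk regardless of treatment history.
    return "low"
```

Each branch corresponds to a justification the doctor can state aloud (“the tumour exceeds 30 mm and was untreated, hence high risk”) – exactly the kind of account a complex multi-omics model cannot offer.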

This also affects the notion of informed patient consent. “The doctor is obliged to provide sufficient information to enable the patient to accept or refuse treatment. However, if the practitioner is not fully informed, this poses an ethical problem,” adds the philosopher. And yet, as Damien Lacroux points out in his study: “Molecular biology has identified the need to take into account thousands of pieces of patient omic data as an essential means of making progress in oncology.” AI would therefore enable better management of the potential evolution of the disease, by refining the proposed treatments… at the expense of trust between doctors and patients.

The importance of having humans in the driver’s seat

Whether AI is integrated into the medical deliberation process (what Damien Lacroux calls “analytical deliberation” in his article) or sits entirely outside the decision-making process and only intervenes as a final consultation (“synthetic deliberation”), there must be total transparency as far as patients are concerned. The main fear raised by the researcher during group interviews with patients7 remains that “the machine” will make the diagnosis without human intervention. “But this is not at all the case today,” Damien Lacroux reassures.

These algorithmic scores, which propose a probability of cancer recurrence based on patient data, also raise other questions specific to predictive medicine: when are we really cured? Can we truly be free of the disease when uncertainty persists and we live in constant anticipation of a possible recurrence? These questions, like so many others, have yet to be answered.

Sophie Podevin
1. Institut Curie article, December 2022 (https://curie.fr/actualite/publication/diagnostic-du-cancer-du-sein-lintelligence-artificielle-dibex-bientot-realite)
2. Institut Curie article, November 2024 (https://curie.fr/actualite/recherche/intelligence-artificielle-linstitut-curie-implemente-les-outils-dibex-medical)
3. Inserm press release, June 2023 (https://presse.inserm.fr/une-intelligence-artificielle-pour-predire-le-vieillissement/67138/)
4. December 2024 article on Info.gouv using the Kanopee application as an example (https://www.info.gouv.fr/actualite/lintelligence-artificielle-au-service-de-la-sante-mentale)
5. Project code ERP-2021–23680708 – ERP-2021-ERAPERMED2021-MIRACLE.
6. Interview, February 2024: bioinformaticians from the MIRACLE project, Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI), Leipzig, Germany.
7. These interviews were conducted with patient associations outside the MIRACLE project.
