
AI in medical decision-making: a question of ethics?

Damien Lacroux
philosopher of science and researcher at the UNESCO Chair in the Ethics of the Living and the Artificial
Key takeaways
  • Artificial intelligence is gradually becoming an integral part of predictive and personalised medicine, and in helping medical professionals make therapeutic decisions.
  • The aim of the MIRACLE project is to identify the risk of recurrence in lung cancer patients, using an algorithm to aid medical decision-making.
  • To achieve this, the algorithm is fed with a large amount of patient data – the more data there is, the narrower the algorithm’s margin of error.
  • But the more powerful the AI, the more opaque it becomes for practitioners, who have no way of understanding what data has led to the probability of recurrence proposed by the AI.
  • AI therefore raises ethical issues concerning transparency in medicine, where the main fear of patients remains that the machine will impose a diagnosis without human intervention.

Artificial intelligence to detect breast cancer¹ or prostate cancer², algorithms that calculate our physiological age to predict ageing³, or conversational agents to monitor our mental health⁴… Artificial intelligence tools are gradually becoming part of medical practice. The focus is on predictive and personalised medicine, as well as on therapeutic decision-making by the medical profession. But how do patients perceive this relationship between doctors and AI? And how do practitioners really interact with the technology?

These are the questions posed by Damien Lacroux, philosopher of science and researcher at the UNESCO Chair in the Ethics of the Living and the Artificial. “During my interviews, I noticed that patients imagine a particular relationship between doctors and AI,” explains the researcher, who specialises in the integration of algorithms in oncology. “We tend to believe that human specialists in oncology deliberate on our case before making a decision, and that the technology intervenes at a later stage to validate the deliberation,” he adds. But is this really the case?

AI to prevent the risk of lung cancer recurrence

To find out, Damien Lacroux spoke to scientists from the MIRACLE⁵ project. This ambitiously named European study was launched in 2021 and brings together laboratories in Italy, Spain, Germany and France. Its aim is to identify the risk of recurrence in lung cancer patients, using an algorithm to support medical decision-making. To achieve this, the researchers are training an AI model (machine learning) in a supervised manner. The algorithm is “fed” with data from a cohort of patients for whom the existence or absence of a recurrence is known. The ingested data is of three types: clinico-pathological data (such as the patient’s sex, the history of their disease, or the treatments they may have undergone); medical imaging data; and finally, omics data, i.e. a mass of information relating to molecular biology (DNA or RNA from tumours).

Using a cohort of 220 patients, the scientists feed the algorithm with all the data collected, as well as information on whether or not a recurrence occurred – and, if so, how long it took to appear. “Then we let the algorithm do its work! This involves an unimaginable amount of data, which would be impossible for humans to process on their own,” explains Damien Lacroux. “Today, the project is behind schedule, and we’ve only just finished collecting data from the first cohort. We still have to start training the algorithm with this data and then recruit a second cohort to validate its training.” So we’ll have to wait a little longer before we see the MIRACLE project in action.
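The supervised set-up described above can be illustrated with a minimal sketch. This is not the MIRACLE pipeline (whose code and data are not public): the features, labels and model below are entirely synthetic, and a simple logistic regression stands in for the project’s far more complex methods. The principle is the same, however: fit a model on patients whose recurrence status is known, so that it can later output a probability of recurrence for new patients.

```python
import numpy as np

# Toy illustration of supervised training on a labelled cohort (synthetic data,
# NOT the MIRACLE project's actual model or data).
rng = np.random.default_rng(0)
n_patients, n_features = 220, 50   # cohort size echoes the article; features are invented
X = rng.normal(size=(n_patients, n_features))          # stand-in for clinical/imaging/omics data
true_w = rng.normal(size=n_features)
y = (X @ true_w + rng.normal(scale=0.5, size=n_patients) > 0).astype(float)  # 1 = recurrence

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit logistic regression by gradient descent: the learned weights map a
# patient's feature vector to a probability of recurrence.
w = np.zeros(n_features)
for _ in range(2000):
    p = sigmoid(X @ w)                      # current predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / n_patients # gradient step on the log-loss

train_acc = float(((sigmoid(X @ w) > 0.5) == y).mean())
print(f"training accuracy: {train_acc:.2f}")
```

With 50 features this toy model is still inspectable (each weight can be read off); the article’s point is that with thousands of omics variables and more complex models, that inspectability disappears.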

AI: a black box for medical decision-making

But the way it works immediately raises an ethical issue, pointed out by the researchers interviewed by Damien Lacroux. “At the start of training, bioinformaticians manage to slice up the datasets and associate the AI’s results with this or that input factor. But gradually the data increases, and it becomes a black box.” This growing volume of data makes the models used to refine predictions more complex. And therein lies the paradox: as the amount of data increases, the algorithm’s margin of error decreases. The AI is therefore more accurate, but the way it works is less clear to practitioners. How can they explain the decisions made by the AI to patients, or guarantee the absence of bias, if they themselves are not familiar with its inner workings?

In the field of oncology, decision trees are often used to help doctors justify their clinical reasoning. However, the integration of algorithmic scores into these processes can conflict with the need for transparency on the part of doctors, who sometimes struggle to understand what input data has led the AI to its estimated probability of recurrence. “Even if we managed to decipher every internal calculation in the algorithm, the result would be so mathematically complex that doctors would not be able to interpret it or use it in their clinical practice,” explains a German bioinformatician working on the MIRACLE project, interviewed⁶ by Damien Lacroux for his forthcoming study.
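The contrast with decision trees can be made concrete. A clinical decision tree is essentially a short chain of readable if/then questions, so every conclusion comes with a path a doctor can recite to a patient. The sketch below is purely illustrative: the variables and thresholds are invented for the example, not clinical guidance.

```python
# A hand-readable decision rule of the kind decision trees encode.
# Variables and thresholds are invented for illustration only.
def recurrence_risk(tumour_stage: int, node_involvement: bool, age: int) -> str:
    """Return a coarse risk label with a traceable reasoning path."""
    if tumour_stage >= 3:
        return "high"
    if node_involvement:
        return "high" if age >= 70 else "intermediate"
    return "low"

# Each answer can be justified step by step, e.g. "stage below 3, no node
# involvement, therefore low risk" - exactly the traceability that a score
# computed from thousands of weighted omics variables does not offer.
print(recurrence_risk(tumour_stage=2, node_involvement=False, age=64))  # low
```

The ethical tension the researchers describe is precisely the gap between this kind of auditable rule and a learned score whose internal calculation, even if fully exposed, would be too complex to interpret at the bedside.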

This also affects the notion of informed patient consent. “The doctor is obliged to provide sufficient information to enable the patient to accept or refuse treatment. However, if the practitioner is not fully informed, this poses an ethical problem,” adds the philosopher. And yet, as Damien Lacroux points out in his study: “Molecular biology has identified the need to take into account thousands of pieces of patient omics data as an essential means of making progress in oncology.” AI would therefore enable better management of the potential evolution of the disease, by refining the proposed treatments… at the expense of trust between doctors and patients.

The importance of having humans in the driver’s seat

Whether AI is integrated into the medical deliberation process (what Damien Lacroux calls “analytical deliberation” in his article) or remains entirely outside the decision-making process and only intervenes as a final consultation (“synthetic deliberation”), there must be total transparency as far as patients are concerned. The main fear raised by the researcher during group interviews with patients⁷ remains that “the machine” will make the diagnosis without human intervention. “But this is not at all the case today,” reassures Damien Lacroux.

These algorithmic scores, which propose a probability of cancer recurrence based on patient data, also raise other questions specific to predictive medicine: when are we really cured? Can we really be free of the disease when uncertainty persists and we live in constant anticipation of a possible recurrence? These questions, like so many others, have yet to be answered.

Sophie Podevin
1. Institut Curie article, December 2022 (https://curie.fr/actualite/publication/diagnostic-du-cancer-du-sein-lintelligence-artificielle-dibex-bientot-realite)
2. Institut Curie article, November 2024 (https://curie.fr/actualite/recherche/intelligence-artificielle-linstitut-curie-implemente-les-outils-dibex-medical)
3. Inserm press release, June 2023 (https://presse.inserm.fr/une-intelligence-artificielle-pour-predire-le-vieillissement/67138/)
4. December 2024 article on Info.gouv using the Kanopee application as an example (https://www.info.gouv.fr/actualite/lintelligence-artificielle-au-service-de-la-sante-mentale)
5. Project code ERP-2021-23680708 – ERP-2021-ERAPERMED2021-MIRACLE.
6. Interview, February 2024: bioinformaticians from the MIRACLE project, Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI), Leipzig, Germany.
7. These interviews were conducted with patient associations outside the MIRACLE project.
