Digital innovations for better health

Why the medical AI revolution could never happen

with Joël Perez Torrents, PhD student at I³-CRG* at École Polytechnique (IP Paris)
On November 16th, 2022
5 min reading time
Key takeaways
  • Artificial intelligence (AI) for medical applications has the potential to profoundly change healthcare practices in the long term: diagnostics, treatment, and patient experience.
  • But, until now, most developments in AI have continued existing medical efforts rather than completely overturning pre-existing methods.
  • Deployment of medical AI is slowed at both the institutional and individual level by the conservative environment around healthcare, where innovation is slow-moving.
  • The technical nature of AI also reduces the disruptiveness of new applications, as it relies on already existing data.

Current developments in the medical applications of artificial intelligence (AI) have been described by many as a “revolution in medicine”. Yet even though advances in Machine Learning – and specifically Deep Learning – profoundly change how things are done, claims of a “revolution” are undoubtedly misleading.

Moreover, the idea, while appealing, suggests that medicine is bound to follow a paradigm shift oriented solely around the available technology, whereas the way AI changes medicine could instead be driven by a more collective effort of those involved. If that were the case, the actual transformation would be an ongoing change over the decades to come rather than an instantaneous shift. As such, uptake of AI for medical purposes is unlikely to be the “revolution” we have been promised. Here is why.

#1 Changes in medical practice are slow

Precision medicine is a long-standing effort to leverage data for better treatments. It was pioneered by Karl Pearson and Francis Galton in the late 19th Century, who were the first to collect data with the explicit objective of analysing it statistically. Since then, amongst many efforts, during the second half of the 20th Century the National Institutes of Health (NIH) in the USA developed a wide range of statistical methods for precision medicine. The Human Genome Project in the 2000s, and the development of more advanced hardware and software to support AI, can therefore be considered a continuation of these efforts.

AI comes as a continuation of previous technologies, rather than an immediate revolution.

Of the many examples of AI in medicine, radiology is a compelling one. The discipline developed with Röntgen’s Nobel-prize-winning discovery of X‑rays in 1895. Radiography was the only technique for non-invasive medical imaging for almost 50 years. Over time, gains in precision and ease of use made it one of the prime choices for diagnosis in many cases. For instance, surgeons used to diagnose appendicitis through touch alone, but now CT scans are the preferred choice. As with precision medicine, AI comes as a continuation of those developments, just as previous technologies did before it – rather than an immediate “revolution”.

#2 Hospitals are resistant to change 

Radiology is one of – if not the – first medical disciplines in which the new generation of AI tools is being commercialised. The first scientific papers detailing proofs of concept using Deep Learning on radiographs were published in the early 2010s. And now, a decade later, the first tools are hitting the market. So, how did that happen? In part, the technology has matured, but changes were also required on an administrative level.

Even though some applications of the technology were developed five years ago, time and investment were needed not only to build the AI but to obtain certification from regulatory institutions to use it. Now, authorisations are being delivered more quickly, since both parties have learned how to prove the validity of AI applications – sometimes it is simply a matter of several months. Nevertheless, buyers and users still need to check the utility of the tool in their working context – i.e., patients’ needs and changes of practice across hospitals.

Furthermore, hospitals need to find funds to pay for AI tools; since they amount to new practices, there is often no budget in place to pay for them. A hospital can take a year or more in administrative processes before buying AI and, although regulatory institutions may have validated the safety of a product, there remain few cases where those devices are reimbursed. The more novel the tool, or the higher its “revolutionary” potential, the higher the barriers rise in the medical setting, due to its conservative culture with regards to safety. The medical system pursues accuracy, which is at odds with the uncertainty of innovation.

#3 Data requirements are time-consuming

Rollout of medical AI applications is also limited by inherent characteristics of the technology. To start, data must be produced, and legal frameworks make creating your own dataset very complicated – not to mention the time and money required. In many cases, developers opt for “secondary use”. This term refers to data originally produced for other purposes, such as diagnosis or administrative paperwork – not specifically for AI applications. However, this means that effort is needed to clean the data, whilst still facing the many barriers: GDPR, access authorisations, monetisation, etc.

In addition, getting the dataset is only a milestone. Medical experts are needed to label data and help developers make sense of the results. Many iterations between data treatment and models are required before reaching a valid result. A rule of thumb estimates that a proof of concept in AI is 80% data pre-treatment and 20% modelling. Add to that the regulatory validation described above, as well as the users who must be convinced too.
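The pre-treatment burden described above can be made concrete with a minimal sketch. The records, field names, and cleaning rules below are purely hypothetical, but they illustrate the kind of work that dominates a proof of concept built on “secondary use” data: pseudonymising identifiers, dropping incomplete records, and normalising values that were never entered with modelling in mind.

```python
import hashlib

# Hypothetical "secondary use" records, originally entered for clinical or
# administrative purposes rather than for AI development.
raw_records = [
    {"patient_id": "A-1021", "age": "63", "finding": "Nodule"},
    {"patient_id": "A-1022", "age": "",   "finding": "nodule"},   # missing age
    {"patient_id": "A-1023", "age": "71", "finding": " NORMAL "},
]

def clean(records):
    cleaned = []
    for rec in records:
        if not rec["age"]:                      # drop incomplete records
            continue
        cleaned.append({
            # pseudonymise: replace the identifier with a one-way hash
            "pid": hashlib.sha256(rec["patient_id"].encode()).hexdigest()[:12],
            "age": int(rec["age"]),
            # normalise free-text labels entered inconsistently by clinicians
            "finding": rec["finding"].strip().lower(),
        })
    return cleaned

for row in clean(raw_records):
    print(row)
```

Each of these steps looks trivial in isolation; the 80/20 rule of thumb reflects how many such rules accumulate across a real dataset, and how many iterations with medical experts are needed to get them right.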

Lastly, applications work best when the tasks performed are narrow. The wider the scope, the more complicated and uncertain the development. For instance, current applications in radiology are often limited to detecting a specific area of interest in the body. Many of the results are false positives, and complicated cases are seldom handled – such as breast implants, which often confound AI analysis of mammographies.

#4 Uses of AI are still unclear

The early adoption of AI in radiology appears in a continuum of new practices. On one side, AI partially replaces the radiologist: some emergency services might use such tools to treat incoming patients when there is no radiologist on duty, with a radiologist checking the results at a later stage. On the other, if AI serves triage purposes, it could be used as a second opinion to avoid false negatives. The difference in use could determine what is expected of the tool – precision, recall, and other metrics would be calibrated accordingly.
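The calibration trade-off between the two uses can be sketched with hypothetical scores and labels. A triage or second-opinion tool would lower its decision threshold to maximise recall (few missed cases, at the cost of more false positives), while a tool reading on its own might prefer a higher threshold for precision:

```python
def precision_recall(scores, labels, threshold):
    """Precision and recall of a binary classifier at a given threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical model scores and ground truth (1 = abnormality present).
scores = [0.95, 0.85, 0.70, 0.55, 0.45, 0.30, 0.20, 0.10]
labels = [1,    1,    1,    0,    1,    0,    0,    0]

for use, threshold in [("triage / second opinion", 0.3),
                       ("standalone reading", 0.6)]:
    p, r = precision_recall(scores, labels, threshold)
    print(f"{use}: threshold={threshold}, precision={p:.2f}, recall={r:.2f}")
```

On this toy data, the low threshold catches every abnormality (recall 1.0) but flags more false positives, while the high threshold flags only true cases (precision 1.0) but misses one – exactly the trade-off that the intended use of the tool would have to settle.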

Related issues, like responsibility, are still not formally addressed.

Such questions and related issues, like responsibility, are still not formally addressed. And how this pans out will ultimately affect AI development and use. Indeed, reliance on tools and automation has been found to decrease expertise. If today’s senior radiologists can distinguish true from false in AI analyses, the same is not true of less experienced ones. As younger generations continue to rely on it, they might become more dependent and less critical of their results, though this could come to the benefit of other skills.

Revolution or transformation?

Thinking there is a “revolution” whose course we cannot influence is limiting. How AI applications develop may have more to do with current constraints on healthcare services – lack of personnel, funding, and resources – than with optimal applications for medicine in general. Nevertheless, commissions are being formed¹ at the national and international level to regulate such issues, and perhaps other forms of collective decision-making could also be actioned, like the “communities of inquiry” developed by pragmatists as the cornerstone of democratic life².

The arguments provided above do not entirely discredit the possibility that a disruptive application of AI could appear and “revolutionise” medical care. Yet, they do try to put the rate of current developments into context and inscribe them in the decades-long process of traditional innovation in healthcare. And, more importantly, they show how the transition is not bound to evolve based solely on the technology – the slow rate of uptake of these innovations leaves room for collective action on how AI is used in healthcare.

1. https://eithealth.eu/opportunity/call-for-applications-external-advisory-group-to-the-european-taskforce-for-harmonised-evaluation-of-digital-medical-devices-dmds/
2. Pragmatism and Organization Studies, 2018, Philippe Lorino, chapter 6, pp. 158–188.
